
    Power Reduction of OLED Displays by Tone Mapping Based on Helmholtz-Kohlrausch Effect

    The Helmholtz-Kohlrausch effect is a visual characteristic whereby humans perceive colors with higher saturation as brighter. In the proposed method, pixel values are reduced by increasing the saturation while maintaining the hue and value of the HSV color space, which saves power on OLED displays because their power consumption depends directly on the pixel values. Although the luminance decreases, the perceived brightness of the image is maintained by the Helmholtz-Kohlrausch effect. To suppress an excessive increase in saturation, the saturation gain is reduced as brightness increases. As the maximum saturation gain, kMAX, increases, more power is saved, but unpleasant color changes occur. A subjective evaluation with 23 test images consisting of skin, natural, and non-natural images found that kMAX should not exceed 2.0 if unpleasant color change is to be suppressed. At kMAX = 2.0, the power saving is 8.0%. The effectiveness of the proposed technique is confirmed on a smartphone with a 4.5-inch diagonal RGB AMOLED display.
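
    As a rough illustration of the idea only (a sketch, not the authors' implementation; the linear gain schedule and the parameter name k_max are assumptions), a brightness-dependent saturation boost in HSV space might look like this:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def hk_tone_map(rgb, k_max=2.0):
    """Boost saturation while keeping hue and value fixed, so that RGB
    drive levels (and hence OLED power) drop while perceived brightness
    is preserved via the Helmholtz-Kohlrausch effect.

    rgb   : uint8 array of shape (H, W, 3)
    k_max : cap on the saturation gain (illustrative default)
    """
    hsv = rgb_to_hsv(rgb.astype(np.float32) / 255.0)
    s, v = hsv[..., 1], hsv[..., 2]

    # Illustrative schedule: the gain falls from k_max toward 1.0 as the
    # pixel gets brighter, to avoid garish colors in bright regions.
    k = 1.0 + (k_max - 1.0) * (1.0 - v)

    hsv[..., 1] = np.clip(s * k, 0.0, 1.0)
    return (hsv_to_rgb(hsv) * 255.0 + 0.5).astype(np.uint8)
```

    Raising S at a fixed V lowers the non-dominant sub-pixel drive levels, which is where the saving on an RGB OLED comes from; quantifying the actual saving requires a display power model or on-device measurement, as done in the paper.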

    Energy-aware adaptive solutions for multimedia delivery to wireless devices

    The functionality of smart mobile devices is improving rapidly, but their practical use is limited by battery life. This situation cannot be remedied simply by installing higher-capacity batteries: strict physical-space constraints in smartphone design rule out this "quick fix". The solution instead lies in an intelligent, dynamic mechanism for using the hardware components of a device in an energy-efficient manner while maintaining the Quality of Service (QoS) requirements of the applications running on it. This thesis proposes the following Energy-aware Adaptive Solutions (EASE). 1. BaSe-AMy: the Battery and Stream-aware Adaptive Multimedia Delivery algorithm assesses battery life, network characteristics, video-stream properties, and device hardware information in order to dynamically reduce the power consumption of the device while streaming video. The algorithm dynamically computes the most efficient strategy for altering the characteristics of the stream, the playback of the video, and the hardware utilization of the device, while meeting the application's QoS requirements. 2. PowerHop: an algorithm that assesses network conditions, device power consumption, neighbouring devices, and QoS requirements to decide whether to adapt the transmission power or the number of hops a device uses for communication. PowerHop's ability to dynamically reduce the transmission power of the device's Wireless Network Interface Card (WNIC) provides scope for reducing power consumption; in this case, shorter transmission distances with multiple hops can be used to maintain network range. 3. A comprehensive survey of adaptive energy optimizations in multimedia-centric wireless devices. Additional contributions: 1. a custom video comparison tool developed to facilitate objective assessment of streamed videos; 2. a new solution for high-accuracy mobile power logging, designed and implemented.
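
    The abstract does not specify the decision rules, so the following is only a minimal sketch of the kind of battery- and QoS-aware adaptation loop BaSe-AMy describes; the bitrate ladder, thresholds, and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    battery_pct: float       # remaining battery, 0-100
    throughput_kbps: float   # measured network throughput
    min_quality_kbps: float  # lowest bitrate that still meets the app's QoS

# Hypothetical bitrate ladder (kbps) for the adaptive stream.
LADDER = [4000, 2500, 1200, 600]

def pick_bitrate(state: DeviceState) -> float:
    """Choose the highest bitrate the network sustains, then step down
    further as the battery drains, never below the QoS floor."""
    # Highest rung the network can sustain with some headroom.
    sustainable = [r for r in LADDER if r <= 0.8 * state.throughput_kbps]
    rate = sustainable[0] if sustainable else LADDER[-1]

    # Battery-aware back-off: drop one rung below 30%, two below 15%.
    idx = LADDER.index(rate)
    if state.battery_pct < 15:
        idx = min(idx + 2, len(LADDER) - 1)
    elif state.battery_pct < 30:
        idx = min(idx + 1, len(LADDER) - 1)

    return max(LADDER[idx], state.min_quality_kbps)
```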

    Power Consumption Analysis, Measurement, Management, and Issues: A State-of-the-Art Review of Smartphone Battery and Energy Usage

    The advancement and popularity of smartphones have made them essential, all-purpose devices, but the lack of comparable advancement in battery technology has held back their full potential. Given this scarcity, optimal use and efficient management of energy are crucial in a smartphone. This requires a fair understanding of a smartphone's energy consumption factors, for users and device manufacturers as well as for other stakeholders in the smartphone ecosystem: it is important to assess how much of the device's energy is consumed by which components and under what circumstances. This paper provides a generalized but detailed analysis of the internal and external causes of a smartphone's power consumption and offers measures to minimize the consumption attributable to each factor. The main contribution of this paper is four comprehensive literature reviews on: 1) smartphone power consumption assessment and estimation (including power consumption analysis and modelling); 2) power consumption management for smartphones (including energy-saving methods and techniques); 3) the state of the art in research and commercial development of smartphone batteries (including alternative power sources); and 4) mitigating the hazardous issues of smartphone batteries (with a detailed explanation of the issues). The research works are further subcategorized based on different research and solution approaches. A good number of recent empirical research works are considered for this comprehensive review, and each of them is succinctly analysed and discussed.

    Optimisation énergétique de processus de traitement du signal et ses applications au décodage vidéo

    Consumer electronics today offer more and more features (video, audio, GPS, Internet) and connectivity options (multi-radio systems with WiFi, Bluetooth, UMTS, HSPA, LTE-Advanced, ...). The power demand of these devices is growing for the digital part, especially for the processing chip. To support this ever-increasing computing demand, processor architectures have evolved towards multi-core processors, graphics processors (GPU), and other dedicated hardware accelerators. However, battery technology evolves more slowly, so the autonomy of embedded systems is now under great pressure. Among the new functionalities supported by mobile devices, video services take a prominent place; recent analyses show that they will represent 70% of mobile Internet traffic by 2016. Accompanying this growth, new technologies are emerging to enable new services and applications. Among them, HEVC (High Efficiency Video Coding) can double the data compression while maintaining a subjective quality equivalent to its predecessor, the H.264 standard. In a digital circuit, the total power consumption consists of static power and dynamic power. Most modern hardware architectures implement mechanisms to control the power consumption of the system. Dynamic Voltage and Frequency Scaling (DVFS) mainly reduces the dynamic power of the circuit; this technique adapts the power of the processor (and therefore its consumption) to the actual load required by the application. To control the static power, Dynamic Power Management (DPM, or sleep modes) shuts down the supply voltages of specific areas of the chip. In this thesis, we first present a model of the energy consumed by the circuit that integrates the DPM and DVFS modes. This model is generalized to multi-core integrated circuits and integrated into a rapid prototyping tool, so that the optimal operating point of a circuit, i.e. the operating frequency and the number of active cores, can be identified. Secondly, the HEVC application is integrated onto a multi-core architecture coupled with a sophisticated DVFS mechanism. We show that this application can be implemented efficiently on general-purpose processors (GPP) while minimizing the power consumption. Finally, to obtain further energy gains, we propose a modified HEVC decoder that can trade decoding quality for additional energy savings.
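
    As a generic illustration of such a DVFS/DPM energy model (not the thesis's actual parameterization; the voltage/frequency pairs, capacitance, static power, and workload below are made-up numbers), one can sweep the operating frequency and the number of active cores and keep the point that minimizes energy for a fixed workload:

```python
# Generic DVFS/DPM energy model: dynamic power ~ C * V^2 * f per active core,
# static power paid while cores are awake, and DPM assumed to cut static
# power to zero once the workload is finished (race-to-idle).
OPP = [(0.9, 400e6), (1.0, 800e6), (1.1, 1200e6)]  # (volts, Hz) pairs, made up
C_EFF = 1.0e-9        # effective switched capacitance per core (F), made up
P_STATIC = 0.05       # static power per active core (W), made up
WORK_CYCLES = 2.4e9   # cycles needed to decode the sequence, made up

def energy(n_cores, volts, freq, work=WORK_CYCLES):
    """Energy to finish `work` cycles on `n_cores` cores at (volts, freq),
    assuming perfect parallelism across cores."""
    t = work / (n_cores * freq)                  # execution time (s)
    p_dyn = n_cores * C_EFF * volts**2 * freq    # dynamic power (W)
    p_sta = n_cores * P_STATIC                   # static power while awake (W)
    return (p_dyn + p_sta) * t

best = min((energy(n, v, f), n, f) for n in (1, 2, 4) for v, f in OPP)
print(f"optimal point: {best[1]} cores at {best[2] / 1e6:.0f} MHz, {best[0]:.3f} J")
```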

    Efficient Methods for Computational Light Transport

    In this thesis we present contributions to different challenges of computational light transport. Light transport algorithms are present in many modern applications, from image generation for visual effects to real-time object detection. Light is a rich source of information that allows us to understand and represent our surroundings, but obtaining and processing this information presents many challenges due to its complex interactions with matter. This thesis provides advances on this subject from two different perspectives: steady-state algorithms, where the speed of light is assumed infinite, and transient-state algorithms, which deal with light as it travels not only through space but also through time. Our steady-state contributions address problems in both offline and real-time rendering. We target variance reduction in offline rendering by proposing a new efficient method for rendering participating media. In real-time rendering, we address the energy constraints of mobile devices by proposing a power-efficient rendering framework for real-time graphics applications. In the transient state, we first formalize light transport simulation in this domain and present new efficient sampling methods and algorithms for transient rendering. We finally demonstrate the potential of simulated data to correct multipath interference in Time-of-Flight cameras, one of the pathological problems in transient imaging.

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications into various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. Since data are required to build machine learning networks, sensors are one of the most important enabling technologies, and machine learning networks can in turn contribute to improved sensor performance and new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, camera calibration for intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest growing stem volume estimation, road management, image denoising, and touchscreens.

    Data-driven visual quality estimation using machine learning

    Today a lot of visual content is produced and accessible, thanks to improvements in technology such as smartphones and the internet. This creates a need to assess the quality perceived by users in order to further improve the experience. However, only a few state-of-the-art quality models are specifically designed for higher resolutions, predict more than the mean opinion score, or use machine learning. One goal of this thesis is to train and evaluate such machine learning models for higher resolutions on several datasets. First, an objective evaluation of image quality at higher resolutions is performed. The images are compressed using video encoders, and AV1 is shown to perform best in terms of quality and compression. This evaluation is followed by the analysis of a crowdsourcing test in comparison with a lab test investigating image quality. Afterwards, deep learning-based models for image quality prediction and an extension for video quality are proposed. However, the deep learning-based video quality model is not practically usable because of performance constraints. For this reason, pixel-based video quality models using well-motivated features covering image and motion aspects are proposed and evaluated. These models can be used to predict mean opinion scores for videos, or even other video quality-related information, such as rating distributions. The introduced model architecture can be applied to other video problems, such as video classification, gaming video quality prediction, gaming genre classification, or encoding parameter estimation. One further important aspect is the processing time of such models; hence, a generic approach to speed up state-of-the-art video quality models is introduced, showing that a significant amount of processing time can be saved while achieving similar prediction accuracy. The models have been released as open source so that the developed frameworks can be used for further research, and the presented approaches may serve as building blocks for newer media formats.
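
    As a toy illustration of the pixel-based, feature-driven approach (the features and regressor here are simplified stand-ins, not the thesis's actual feature set), per-frame image and motion statistics can be pooled over a clip and fed to a learned regressor that predicts MOS:

```python
import numpy as np
# from sklearn.ensemble import RandomForestRegressor  # for the training step sketched below

def clip_features(frames):
    """frames: sequence of grayscale frames (H, W) as floats in [0, 1].
    Returns a small feature vector covering image and motion aspects."""
    frames = np.asarray(frames, dtype=np.float32)
    contrast = frames.std(axis=(1, 2))                                # per-frame contrast
    sharpness = [np.abs(np.diff(f, axis=1)).mean() for f in frames]   # crude edge energy
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))        # frame-difference motion

    # Temporal pooling: mean and std of each per-frame statistic.
    feats = []
    for series in (contrast, np.asarray(sharpness), motion):
        feats += [series.mean(), series.std()]
    return np.asarray(feats)

# Training would use clips with subjective MOS labels (not shown here), e.g.:
#   X = np.stack([clip_features(c) for c in training_clips])
#   model = RandomForestRegressor(n_estimators=200).fit(X, mos_labels)
#   predicted_mos = model.predict([clip_features(test_clip)])[0]
```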