18 research outputs found
LAPSE: Low-Overhead Adaptive Power Saving and Contrast Enhancement for OLEDs
Organic Light Emitting Diode (OLED) display panels are becoming increasingly popular, especially in mobile devices; one of the key characteristics of these panels is that their power consumption strongly depends on the displayed image. In this paper we propose LAPSE, a new methodology that concurrently reduces the energy consumed by an OLED display and enhances the contrast of the displayed image, relying on image-specific pixel-by-pixel transformations. Unlike previous approaches, LAPSE focuses specifically on reducing the overheads required to implement the transformation at runtime. To this end, we propose a transformation that can be executed in real time, either in software with low time overhead, or in a hardware accelerator with a small area and a low energy budget. Despite the significant reduction in complexity, we obtain results comparable to those achieved with more complex approaches in terms of power saving and image quality. Moreover, our method allows easy exploration of the full quality-versus-power tradeoff by acting on a few basic parameters; thus, it enables runtime selection among multiple display quality settings, according to the status of the system.
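The abstract above describes image-adaptive, pixel-by-pixel polynomial transformations that darken an image (saving OLED power, which grows with emitted luminance) while stretching contrast. The following is a minimal illustrative sketch of that idea, not the paper's actual polynomial: the quadratic family y = a*x^2 + (1-a)*x, the parameter name `a`, and the first-order power model are assumptions chosen only to make the mechanism concrete.

```python
def lapse_like_transform(pixels, a=0.5):
    """Illustrative quadratic pixel transform y = a*x^2 + (1-a)*x.

    For a in (0, 1] every pixel is darkened (x^2 < x on [0, 1]), which
    reduces OLED power, while the curve's slope grows toward the
    highlights (1 + a at x = 1), giving a mild contrast stretch there.
    Pixels are luminances in [0, 1]; the clip guards against rounding.
    """
    return [min(1.0, max(0.0, a * x * x + (1.0 - a) * x)) for x in pixels]

def relative_power(pixels):
    # First-order OLED power model: proportional to total emitted luminance.
    return sum(pixels)

# Synthetic grayscale ramp standing in for a frame.
image = [i / 255.0 for i in range(256)]
out = lapse_like_transform(image, a=0.5)
saving = 1.0 - relative_power(out) / relative_power(image)
```

Sweeping the single knob `a` from 0 (identity, no saving) toward 1 traces a quality-versus-power curve, which is the kind of few-parameter tradeoff exploration the abstract refers to.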
Low-Overhead Adaptive Brightness Scaling for Energy Reduction in OLED Displays
Organic Light Emitting Diode (OLED) is rapidly emerging as the mainstream mobile display technology. This poses new challenges for the design of energy-saving solutions for OLED displays, specifically intended for interactive devices such as smartphones, smartwatches and tablets. To date, the standard solution is brightness scaling. However, the amount of scaling is typically set statically (either by the user, through a setting knob, or by the system in response to predefined events such as a low-battery status) and independently of the displayed image.
In this work we describe a smart computing technique called Low-Overhead Adaptive Brightness Scaling (LABS), that overcomes these limitations. In LABS, the optimal content-dependent brightness scaling factor is determined automatically for each displayed image, on a frame-by-frame basis, with a low computational cost that allows real-time usage.
The basic form of LABS achieves more than 35% power reduction on average when applied to different image datasets, while keeping the Mean Structural Similarity Index (MSSIM) between the original and transformed images above 97%.
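To make the idea of a content-dependent brightness scale concrete, here is a small sketch assuming a uniform scale y = s*x and using only the luminance term of SSIM as the quality proxy (a deliberate simplification of MSSIM; the function names, the binary search, and the 0.97 target are illustrative, not the paper's algorithm).

```python
def luminance_similarity(scale, mean_lum, c1=0.01 ** 2):
    """SSIM luminance term l(x, s*x) for a uniform brightness scale s.

    With y = s * x the mean luminance scales linearly, so the term
    (2*mu_x*mu_y + C1) / (mu_x^2 + mu_y^2 + C1) collapses to a closed
    form in s alone. It is monotonically increasing on s in [0, 1].
    """
    mu2 = mean_lum * mean_lum
    return (2.0 * scale * mu2 + c1) / ((1.0 + scale * scale) * mu2 + c1)

def labs_like_scale(pixels, quality_target=0.97):
    """Binary-search the smallest brightness scale whose similarity proxy
    stays above `quality_target`; OLED power is roughly proportional to
    the scale, so a smaller scale means more saving."""
    mean_lum = sum(pixels) / len(pixels)
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if luminance_similarity(mid, mean_lum) >= quality_target:
            hi = mid
        else:
            lo = mid
    return hi  # invariant: hi always satisfies the quality target

frame = [i / 255.0 for i in range(256)]   # synthetic grayscale frame
s = labs_like_scale(frame, 0.97)
power_saving = 1.0 - s                    # power ~ sum(s * x) = s * sum(x)
```

Because the proxy is a closed form in `s`, the per-frame cost is a handful of scalar evaluations, which is the kind of low overhead that makes frame-by-frame adaptation feasible.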
Image Processing for Machine Vision Applications
The abstract is provided in the attachment.
Design Techniques for Energy-Quality Scalable Digital Systems
Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed on mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, on the order of months or years, within the limited energy budgets provided by small form-factor batteries. Fortunately, many such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also contain small or occasional errors, thanks to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations against energy efficiency, by relaxing the precision, the accuracy, or the reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g. 
I/O, memory or data transfers). Second, in order to fulfill its promises and become widespread in commercial devices, EQ scalable design needs to achieve industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing “dynamic” systems in which the precision or reliability of operations (and consequently their energy consumption) can be tuned at runtime, rather than “static” solutions in which the output quality is fixed at design time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often relevant contributors to the total energy in sensor nodes and mobile systems, respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of the benefits and overheads deriving from EQ scalability, using industrial-level models, and to the integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption. More specifically, the contribution of this thesis is divided into three parts. In the first body of work, the design of EQ scalable modules for processing hardware data paths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction. 
These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are re-thought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchanges between processing hardware and sensors. In this context, two novel bus encodings are proposed, called Approximate Differential Encoding and Serial-T0, that exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized in terms of balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement, by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display. 
For each of these three topics, results show that the aforementioned goal of building EQ scalable systems compatible with existing best practices and mature enough to be integrated in commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy-versus-quality tradeoff.
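The bus-encoding part of the thesis trades bounded approximations of sensor data for lower energy on a serial link. The sketch below illustrates that general idea with a clamped delta encoder; it is not the thesis's Approximate Differential Encoding or Serial-T0 — the function names, the saturation scheme, and the `delta_bits` knob are assumptions chosen to show how a runtime-tunable bit-width bounds each transmitted difference.

```python
def delta_encode(samples, delta_bits=4):
    """Illustrative clamped delta encoder for integer sensor samples.

    Each sample is sent as the difference from the previously *decoded*
    value, saturated to a signed `delta_bits` field. Slowly varying
    sensor data yields small deltas (few changing bits on the line);
    saturation introduces a controlled, reconfigurable approximation.
    """
    lo, hi = -(1 << (delta_bits - 1)), (1 << (delta_bits - 1)) - 1
    deltas, prev = [], 0
    for s in samples:
        d = max(lo, min(hi, s - prev))
        deltas.append(d)
        prev += d          # mirror the receiver's reconstruction
    return deltas

def delta_decode(deltas):
    out, prev = [], 0
    for d in deltas:
        prev += d
        out.append(prev)
    return out

smooth = [0, 2, 5, 7, 8]                 # slowly varying: exact round-trip
exact = delta_decode(delta_encode(smooth))
step = delta_decode(delta_encode([0, 100]))  # fast step: saturated to +7
```

Raising `delta_bits` at runtime shrinks the approximation error at the cost of wider (more energy-hungry) transfers, which mirrors the runtime error-versus-energy knob the abstract describes.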
A Review and Analysis of Automatic Optical Inspection and Quality Monitoring Methods in Electronics Industry
The electronics industry is one of the fastest evolving, most innovative, and most competitive industries. In order to meet the high consumption demands for electronic components, the quality standards of the products must be well maintained. Automatic optical inspection (AOI) is one of the non-destructive techniques used in the quality inspection of various products. The technique is considered robust and can replace human inspectors, who are prone to dullness and fatigue when performing inspection tasks. A fully automated optical inspection system consists of hardware and software setups. The hardware setup includes the image sensor and illumination settings and is responsible for acquiring the digital image, while the software part implements an inspection algorithm to extract the features of the acquired images and classify them as defective or non-defective based on the user requirements. A sorting mechanism can then separate the defective products from the good ones. This article provides a comprehensive review of the various AOI systems used in the electronics, micro-electronics, and opto-electronics industries. In this review, the defects of the commonly inspected electronic components, such as semiconductor wafers, flat panel displays, printed circuit boards and light emitting diodes, are first explained. Hardware setups used in acquiring images are then discussed in terms of camera and lighting source selection and configuration. The inspection algorithms used for detecting defects in electronic components are discussed in terms of the preprocessing, feature extraction and classification tools used for this purpose. Recent articles that use deep learning algorithms are also reviewed. The article concludes by highlighting current trends and possible future research directions. Framework of the IQONIC Project; European Union’s Horizon 2020 Research and Innovation Program
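The classify-as-defective-or-not step described above can be illustrated with a toy referential inspection check: the candidate image is compared pixel by pixel against a known-good golden reference. This sketch is not from the article; the function name, tolerances, and flat grayscale representation are assumptions kept deliberately simple.

```python
def inspect(image, reference, pixel_tol=16, max_defective_pixels=4):
    """Toy referential AOI check on flat grayscale images (values 0-255).

    Pixels differing from the golden reference by more than `pixel_tol`
    are counted as defect candidates; the part is rejected when the
    count exceeds `max_defective_pixels`. Returns (verdict, count).
    """
    defects = sum(1 for p, r in zip(image, reference) if abs(p - r) > pixel_tol)
    verdict = "defective" if defects > max_defective_pixels else "good"
    return verdict, defects

reference = [100] * 64            # golden image of a known-good part
flawless = [103] * 64             # same part under slightly different light
blemished = reference[:]
blemished[:10] = [200] * 10       # a bright 10-pixel blob (e.g. solder splash)

verdict_ok = inspect(flawless, reference)
verdict_bad = inspect(blemished, reference)
```

The `pixel_tol` margin absorbs illumination variation (a hardware-setup concern the review discusses), while `max_defective_pixels` sets the user's acceptance threshold; real systems replace this pixel count with the feature-extraction and classification stages the article surveys.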
Keep Your Eyes above the Ball: Investigation of Virtual Reality (VR) Assistive Gaming for Age-Related Macular Degeneration (AMD) Visual Training
Humans are above all visual beings, since most information about the outside world is gathered through the visual system. As the aging process starts, functional damage to vision becomes more and more common and the risk of developing a visual impairment rises. Age-related macular degeneration (AMD) is one of the main age-related afflictions that lead to severe damage to the visual system. Those affected lose the ability to use the central part of vision, which is essential for accurate visual information processing.
Even if less accurate, peripheral vision remains unaffected; hence, medical experts have developed procedures to train patients to use peripheral vision instead, so that they can navigate their environment and continue their daily lives. This type of training is called eccentric viewing. However, current approaches have several shortcomings, such as not being engaging or individualizable enough, nor cost- and time-effective.
The main scope of this dissertation was to find out if more engaging and individualizable methods can be used for peripheral training of AMD patients. The current work used virtual reality (VR) gaming to deliver AMD training; the first time such an approach was used for eccentric viewing training. In combination with eye-tracking, real-time individualized assistance was also achieved. Thanks to an integrated eye-tracker in the headset, concentric gaze-contingent stimuli were used to redirect the eyes toward an eccentric location. The concentric feature allowed participants to choose freely and individually their peripheral focus point.
One study investigated the feasibility of a VR system for the individualized visual training of ophthalmic patients, two studies investigated two types of peripheral stimuli (three spatial cues and two optical distortions), and the last study was a case study examining the feasibility of such an approach for a patient with late AMD.
Changes in gaze directionality were observed in the last three studies for one specific spatial cue, a concentric ring. In accordance with the literature, the gaze was directed spontaneously toward the most effective peripheral position. The last study additionally showed that gaming is feasible for future testing with the elderly AMD population. The current work opens the road to more individualized and engaging interventions for eccentric viewing training in late AMD.
Machine Learning in Sensors and Imaging
Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, management, etc. As data are required to build machine learning networks, sensors are one of the most important technologies. In addition, machine learning networks can contribute to the improvement in sensor performance and the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior model, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.
Development of an augmented reality guided computer assisted orthopaedic surgery system
Previously held under moratorium from 1st December 2016 until 1st December 2021.

This body of work documents the development of a proof-of-concept augmented reality guided computer assisted orthopaedic surgery system – ARgCAOS.

After initial investigation, a visible-spectrum single-camera tool-mounted tracking system based upon fiducial planar markers was implemented. The use of visible-spectrum cameras, as opposed to the infra-red cameras typically used by surgical tracking systems, allowed the captured image to be streamed to a display in an intelligible fashion. The tracking information defined the location of physical objects relative to the camera, and therefore allowed virtual models to be overlaid onto the camera image. This produced a convincing augmented experience, whereby the virtual objects appeared to be within the physical world, moving with both the camera and markers as expected of physical objects.

Analysis of the first-generation system identified both accuracy and graphical inadequacies, prompting the development of a second-generation system. This too was based upon a tool-mounted fiducial marker system, and improved performance to near-millimetre probing accuracy. A resection system was incorporated and, utilising the tracking information, controlled resection was performed, producing sub-millimetre accuracies.

Several complications resulted from the tool-mounted approach; therefore, a third-generation system was developed. This final generation deployed a stereoscopic visible-spectrum camera system affixed to a head-mounted display worn by the user. The system allowed the augmentation of the user's natural view, providing convincing and immersive three-dimensional augmented guidance, with probing and
resection accuracies of 0.55±0.04 and 0.34±0.04 mm, respectively.