Red-Eyes Removal through Cluster-Based Boosting on Gray Codes
With the wide diffusion of digital cameras and mobile devices with embedded cameras and flashguns, red-eye artifacts have de facto become a critical problem. The technique described here uses three main steps to identify and remove red eyes. First, red-eye candidates are extracted from the input image by an image filtering pipeline. A set of classifiers is then learned on gray code features extracted in the clustered patch space and employed to distinguish eye patches from non-eye patches. Specifically, for each cluster the gray code of each red-eye candidate is computed, and the most discriminative gray code bits are selected with a boosting approach; the selected bits are used during classification to discriminate eye from non-eye patches. Once red eyes are detected, the artifacts are removed through desaturation and brightness reduction. Experimental results on a large dataset of images demonstrate the effectiveness of the proposed pipeline, which outperforms existing solutions in terms of hit-rate maximization, false-positive reduction, and quality measures.
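The correction step lends itself to a short illustration. Below is a minimal sketch of desaturation plus brightness reduction over a detected red-eye mask; the channel averaging and the darkening factor are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def correct_red_eye(image, mask, darken=0.7):
    """Desaturate and darken pixels flagged as red-eye.

    image:  HxWx3 float RGB array in [0, 1]
    mask:   HxW boolean array of detected red-eye pixels
    darken: brightness-reduction factor (illustrative value)
    """
    out = image.copy()
    g, b = image[..., 1], image[..., 2]
    # Desaturation: replace all channels with the mean of green and blue,
    # removing the red cast while keeping the pupil's luminance structure.
    gray = (g + b) / 2.0
    for c in range(3):
        out[..., c][mask] = gray[mask]
    # Brightness reduction: a pupil should be dark, not merely gray.
    out[mask] *= darken
    return np.clip(out, 0.0, 1.0)
```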
Automatic Red-Eye Removal based on Sclera and Skin Tone Detection
It is well known that taking portrait photographs with a built-in camera may create a red-eye effect. This effect is caused by light entering the subject's eye through the pupil and reflecting from the retina back to the sensor. Red eyes are probably one of the most important types of artifact in portrait pictures. Many different techniques exist for removing these artifacts digitally after image capture. In most existing software tools, the user has to select the zone in which the red eye is located. The aim of our method is to detect and correct red eyes automatically. Our algorithm detects the eye itself by finding the appropriate colors and shapes without input from the user. We use the basic knowledge that an eye is characterized by its shape and the white color of the sclera. Combining this intuitive approach with the detection of skin around the eye, we obtain a higher success rate than most of the tools we tested. Moreover, our algorithm works for any skin tone. The main goal of this algorithm is to remove red eyes from a picture accurately while avoiding false positives completely, which is the biggest problem of camera-integrated algorithms and distributed software tools. At the same time, we want to keep the false-negative rate as low as possible. We implemented this algorithm in a web-based application that allows people to correct their images online.
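As a rough illustration of the idea, the sketch below flags red-dominant pixels and keeps only those near bright, sclera-like pixels. The redness measure, thresholds, and neighborhood size are assumptions for illustration; the paper's actual shape analysis and skin-tone detection are omitted.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def red_eye_candidates(image, redness_thresh=0.5, white_thresh=0.8):
    """Flag candidate red-eye pixels: strong red dominance close to
    bright, sclera-like pixels.

    image: HxWx3 float RGB array in [0, 1]. The thresholds and the
    10-pixel neighborhood are illustrative values, not the paper's.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    redness = r - (g + b) / 2.0            # positive where red dominates
    candidates = redness > redness_thresh
    # Sclera check: keep only candidates within ~10 px of near-white pixels.
    bright = image.min(axis=-1) > white_thresh
    near_sclera = binary_dilation(bright, iterations=10)
    return candidates & near_sclera
```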
Automatic Detection and Correction for Glossy Reflections in Digital Photograph
The popularization of digital technology has made shooting digital photos and using related applications part of daily life. However, the use of flash to compensate for low ambient lighting often leads to overexposure or glossy reflections. This study proposes an automatic detection and inpainting technique to correct overexposed faces in digital photographs. The algorithm segments skin color in the photo and uses face detection to determine candidate bright spots on the face. Based on statistical analysis of color brightness and filtering, the bright spots are identified. Finally, the bright spots are corrected through inpainting. The experimental results demonstrate the high accuracy and efficiency of the method.
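A highly simplified version of the bright-spot stage might look like the following sketch, which thresholds brightness statistically and inpaints the result with OpenCV. The z-score threshold and the choice of Telea inpainting are assumptions; the study's skin segmentation and face detection are omitted for brevity.

```python
import cv2
import numpy as np

def correct_glossy_spots(bgr, z=2.5):
    """Detect and inpaint over-bright spots in a (face) image.

    The z-score threshold stands in for the paper's statistical
    brightness analysis; skin segmentation and face detection
    would restrict the search region in the full method.
    """
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    mu, sigma = gray.mean(), gray.std()
    # Bright spots: pixels far above the mean brightness.
    mask = (gray > mu + z * sigma).astype(np.uint8) * 255
    # Grow the mask slightly so inpainting also covers specular halos.
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))
    return cv2.inpaint(bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```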
Hybrid Analog-Digital Co-Processing for Scientific Computation
In the past 10 years, computer architecture research has moved toward more heterogeneity and less adherence to conventional abstractions. Scientists and engineers hold an unshakable belief that computing holds the keys to unlocking humanity's Grand Challenges. Acting on that belief, they have looked deeper into computer architecture to find specialized support for their applications. Likewise, computer architects have looked deeper into circuits and devices in search of untapped performance and efficiency. The lines between computer architecture layers (applications, algorithms, architectures, microarchitectures, circuits, and devices) have blurred. Against this backdrop, a menagerie of computer architectures is on the horizon: architectures that forgo basic assumptions about computer hardware and require new thinking about how such hardware supports problems and algorithms.
This thesis is about revisiting hybrid analog-digital computing in support of diverse modern workloads. Hybrid computing had extensive applications in early computing history and has been revisited for small-scale applications in embedded systems. But architectural support for using hybrid computing in modern workloads, at scale and with high-accuracy solutions, has been lacking.
I demonstrate solving a variety of scientific computing problems, including stochastic ODEs, partial differential equations, linear algebra, and nonlinear systems of equations, as case studies in hybrid computing. I solve these problems on a system of multiple prototype analog accelerator chips built by a team at Columbia University. On that team I made contributions toward programming the chips, building the digital interface, and validating the chips' functionality. The analog accelerator chip is intended for use in conjunction with a conventional digital host computer.
The appeal of an analog accelerator is its efficiency and performance, but it comes with limitations in accuracy and problem size that we have to work around.
The first problem is how to express problems in this unconventional computation model. Scientific computing phrases problems as differential equations and algebraic equations. Differential equations are a continuous view of the world, while algebraic equations are a discrete one. Prior work in analog computing focused mostly on differential equations; algebraic equations played only a minor role. The key to using the analog accelerator to support modern workloads on conventional computers is that these two viewpoints are interchangeable: the algebraic equations that underlie most workloads can be solved as differential equations, and differential equations are naturally solvable on the analog accelerator chip. A hybrid analog-digital computer architecture can therefore focus on solving linear and nonlinear algebra problems to support many workloads.
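The idea can be illustrated with a small digital simulation: a linear system A x = b becomes the steady state of a gradient-flow ODE, the kind of dynamics an analog accelerator integrates natively. The explicit Euler loop below is a stand-in for the chip's continuous-time evolution, assuming a symmetric positive-definite A so the flow converges; the step size and iteration count are illustrative.

```python
import numpy as np

def solve_via_ode(A, b, dt=0.01, steps=5000):
    """Solve A x = b by integrating the gradient-flow ODE
        dx/dt = -(A x - b),
    whose steady state satisfies A x = b. For symmetric
    positive-definite A the flow converges from any start.
    An analog accelerator evolves x in continuous time; explicit
    Euler steps stand in for that evolution here.
    """
    x = np.zeros_like(b)
    for _ in range(steps):
        x = x - dt * (A @ x - b)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])            # small SPD system
b = np.array([1.0, 2.0])
print(solve_via_ode(A, b), np.linalg.solve(A, b))  # should closely agree
```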
The second problem is how to get accurate solutions from hybrid analog-digital computing. The analog computation model gives less accurate solutions because it gives up representing numbers as digital binary values and instead uses the full range of analog voltage and current to represent real numbers. Prior work has established that encoding data in analog signals gives an energy-efficiency advantage as long as the analog data precision is limited. While the analog accelerator alone may be useful for energy-constrained applications where inputs and outputs are imprecise, we are more interested in using analog in conjunction with digital hardware for precise solutions. This thesis offers the novel insight that the trick is to solve nonlinear problems, where low-precision guesses are useful starting points for conventional digital algorithms.
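A schematic illustration of this division of labor: a deliberately quantized value plays the role of the analog accelerator's low-precision answer, and a digital Newton iteration refines it to full precision. The quantization model and the cube-root example are illustrative assumptions, not the thesis's actual hardware interface.

```python
import numpy as np

def quantize(x, bits=8, full_scale=4.0):
    """Model limited analog precision: round x to 2**bits levels over
    an assumed signal range [-full_scale, full_scale] (illustrative)."""
    scale = (2**bits - 1) / (2 * full_scale)
    return np.round((x + full_scale) * scale) / scale - full_scale

def newton_refine(f, fprime, x0, tol=1e-12, max_iter=50):
    """Digital Newton iteration seeded by a low-precision guess."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x**3 = 2: the quantized value plays the analog chip's coarse
# answer; a few Newton steps restore full double precision.
f = lambda x: x**3 - 2.0
fprime = lambda x: 3.0 * x**2
seed = quantize(1.26)
print(seed, newton_refine(f, fprime, seed))
```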
The third problem is how to solve large problems with hybrid analog-digital computing. The analog computation model cannot handle large problems because it gives up step-by-step discrete-time operation, instead letting variables evolve smoothly in continuous time. To make that happen, the analog accelerator chains hardware for mathematical operations end to end; during computation, analog data flows through the hardware with no overhead from control logic or memory accesses. The downside is that the required hardware grows with problem size. While scientific computing researchers have long split large problems into smaller subproblems to fit the constraints of digital computers, this thesis is a first attempt to treat these divide-and-conquer algorithms as an essential tool for using the analog model of computation.
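One classic divide-and-conquer scheme fitting this picture is block Jacobi iteration, sketched below: each fixed-size diagonal block is solved as a small subproblem, modeling the largest system the analog hardware could hold, while a digital outer loop sweeps until the pieces agree. The block size and the diagonally dominant test matrix are illustrative assumptions.

```python
import numpy as np

def block_jacobi(A, b, block=2, sweeps=200):
    """Solve A x = b by splitting it into fixed-size diagonal blocks.

    Each block solve is a small subproblem (the piece a size-limited
    analog accelerator could take on); the digital outer loop iterates
    until the pieces agree. Converges for diagonally dominant A.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        x_new = x.copy()
        for i in range(0, n, block):
            j = slice(i, min(i + block, n))
            # Remove this block's own contribution from the residual,
            # then a small dense solve stands in for the analog part.
            r = b[j] - A[j, :] @ x + A[j, j] @ x[j]
            x_new[j] = np.linalg.solve(A[j, j], r)
        x = x_new
    return x

rng = np.random.default_rng(0)
n = 6
A = 4.0 * np.eye(n) + rng.uniform(-0.5, 0.5, (n, n))  # diagonally dominant
b = np.ones(n)
print(np.allclose(block_jacobi(A, b), np.linalg.solve(A, b)))
```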
As we enter the post-Moore's-law era of computing, unconventional architectures will offer specialized models of computation that uniquely support specific problem types. Two prominent examples are deep neural networks and quantum computers. Recent trends in computer science research show these unconventional architectures will soon have broad adoption. In this thesis I show that another specialized, unconventional approach is to use analog accelerators to solve problems in scientific computing. Computer architecture researchers will discover other important models of computation in the future. This thesis is an example of the discovery, implementation, and evaluation of how an unconventional architecture supports specialized workloads.
Nondestructive phenolic compounds measurement and origin discrimination of peated barley malt using near-infrared hyperspectral imagery and machine learning.
Quantifying phenolic compounds in peated barley malt and discriminating their origin are essential to maintaining the aroma of high-quality Scotch whisky during the manufacturing process. Total phenol content varies across peated barley malts and is critical in measuring the associated peatiness level. Existing methods for measuring such phenols are destructive and/or time consuming. To tackle these issues, we propose a novel nondestructive system for fast, effective estimation of phenolic concentrations and discrimination of their origins using near-infrared hyperspectral imagery and machine learning. First, novel data acquisition and normalization procedures are developed for robustness. Then, principal component analysis (PCA) and folded-PCA are fused to extract global and local spectral features, followed by support vector machine (SVM) based origin discrimination and deep neural network based phenolic measurement. In total, 27 categories of peated barley malt from eight suppliers are used to form thousands of spectral samples for modelling. A classification accuracy of up to 99.5% and a squared correlation coefficient of up to 98.57% are achieved in our experiments, outperforming several state-of-the-art methods. These results demonstrate the efficacy of our system for automated phenolic measurement and origin discrimination, benefiting quality monitoring in the whisky industry.
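As a simplified sketch of the classification side of such a pipeline, the code below runs normalization, PCA feature extraction, and an SVM origin classifier on synthetic stand-in spectra. Plain PCA replaces the paper's PCA/folded-PCA fusion, and all data shapes and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel NIR spectra: rows are spectral samples,
# columns are wavelength bands; labels are supplier origins.
rng = np.random.default_rng(0)
n_samples, n_bands, n_origins = 600, 256, 8
labels = rng.integers(0, n_origins, n_samples)
spectra = rng.normal(size=(n_samples, n_bands)) + labels[:, None] * 0.1

# Simplified pipeline: band-wise normalization, PCA spectral features,
# and an RBF-kernel SVM for origin discrimination.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),
    SVC(kernel="rbf", C=10.0),
)
print(cross_val_score(model, spectra, labels, cv=5).mean())
```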