
    Image Restoration by Matching Gradient Distributions

    The restoration of a blurry or noisy image is commonly performed with a MAP estimator, which maximizes a posterior probability to reconstruct a clean image from a degraded image. A MAP estimator, when used with a sparse gradient image prior, reconstructs piecewise smooth images and typically removes textures that are important for visual realism. We present an alternative deconvolution method called iterative distribution reweighting (IDR), which imposes a global constraint on gradients so that the reconstructed image has a gradient distribution similar to a reference distribution. In natural images, a reference distribution not only varies from one image to another, but also within an image depending on texture. We estimate a reference distribution directly from the input image for each texture segment. Our algorithm is able to restore rich mid-frequency textures. A large-scale user study supports the conclusion that our algorithm improves the visual realism of reconstructed images compared to those of MAP estimators.
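    As a rough illustration of the global constraint described above, the sketch below compares an image's empirical gradient histogram against a reference distribution using a KL-divergence penalty. This is a toy stand-in in plain NumPy with hypothetical function names, not the paper's actual reweighting scheme.

```python
import numpy as np

def gradient_histogram(img, bins=64, grad_range=(-1.0, 1.0)):
    """Empirical distribution of horizontal/vertical finite-difference gradients."""
    gx = np.diff(img, axis=1).ravel()
    gy = np.diff(img, axis=0).ravel()
    hist, _ = np.histogram(np.concatenate([gx, gy]),
                           bins=bins, range=grad_range)
    hist = hist.astype(float) + 1e-12   # avoid zero bins in the log below
    return hist / hist.sum()

def distribution_mismatch(restored, reference_hist):
    """KL divergence from the reference gradient distribution to the restored
    image's gradient distribution; an IDR-style global constraint drives this
    kind of mismatch term toward zero."""
    h = gradient_histogram(restored, bins=len(reference_hist))
    return float(np.sum(reference_hist * np.log(reference_hist / h)))

# Example: a reference estimated from a (made-up) textured segment of the input,
# compared against an over-smoothed reconstruction that has lost its texture.
rng = np.random.default_rng(0)
reference_img = rng.normal(0.5, 0.1, (64, 64)).clip(0, 1)
oversmoothed = np.full((64, 64), 0.5)
ref_hist = gradient_histogram(reference_img)
print(distribution_mismatch(oversmoothed, ref_hist))
```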

    Optimizing production and inventory decisions at all-you-care-to-eat facilities

    Food service, feeding people outside of their home, is one of the largest industries in the world (Hartel and Klawitter, 2008). Restaurants, hospitals, military services, schools and universities are among those organizations providing these services. Management of a food service system requires operations management skill to operate successfully. A key element of food service is food production. Forecasting demand, managing inventory and preparing menu items are key tasks in the food production process. In this research, a series of three studies is presented to improve the food production system policies at an all-you-care-to-eat (AYCTE) facility. The first study examines two objectives, limiting its focus to foods for which all overproduction must be discarded (that is, leftovers cannot be saved and used in future periods). The first objective of this research is to present a novel method for estimating shortfall cost in a setting with no marginal revenue per satisfied unit of demand. Our methodology for estimating shortfall cost obtains results that are consistent with CDS management's stated aversion to shortfall: we estimate that shortfall values are between 1.6 and 2.7 times larger than the procurement cost and between 30 and over 100 times larger than disposal costs. The second objective is to identify how optimal food production policies at an AYCTE facility would change were life cycle cost estimates of embodied greenhouse gas (GHG) emissions, including carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), included in the disposal costs associated with overproduction. We found that optimal production levels decrease significantly (18-25 percent) for food items with high environmental impacts (such as beef), and decrease less for foods with less embodied CO2. The second study considers a broader set of food types, including both foods that cannot be saved and stored as leftovers (as in the first study) and foods for which overproduction can potentially be saved and served in the future as leftovers. Food service operations in an AYCTE environment need to consider two conflicting objectives: a desire to reduce overproduction food waste (and its corresponding environmental impacts), and an aversion to shortfalls. Similar to the first study, a challenge in analyzing such buffet-style operations is the absence of any lost marginal revenue associated with lost sales that can be used to measure the shortfall cost, complicating any attempt to determine a minimum-cost solution. This research presents optimal production adjustments relative to demand forecasts, demand thresholds for utilization of leftovers, and percentages of demand to be satisfied by leftovers, considering two alternative metrics for overproduction waste: mass and GHG emissions. A statistical analysis of the changes in decision variable values across each of the efficient frontiers can then be performed to identify the key variables that could be modified to reduce the amount of wasted food at a minimal increase in shortfalls. The last study's aim is to minimize overproduction and unmet demand when demand is unknown. It also addresses correlations across demands for certain items (e.g., hamburgers are often demanded with french fries). As in the second study, we again utilize a Hooke-Jeeves optimization method to solve this production planning problem. To model this problem more realistically, demand uncertainty is incorporated into this study's optimization model using a kernel density estimation approach. We illustrate our approach in all three studies with an application to empirical data from Campus Dining Services (CDS) operations at the University of Missouri.
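    The third study's combination of correlated demand and asymmetric overproduction/shortfall penalties can be sketched as follows. This is a minimal illustration under assumed per-unit costs and made-up demand history (none of these numbers come from the dissertation): a joint kernel density estimate captures cross-item correlation, and the expected-cost function is the kind of objective a derivative-free search such as Hooke-Jeeves would minimize over production levels.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical per-unit costs (not from the dissertation): shortfall is penalized
# far more heavily than disposal, echoing the relationships described above only
# in spirit.
PROCURE, DISPOSE, SHORTFALL = 1.0, 0.03, 2.0

def expected_cost(production, demand_samples):
    """Monte-Carlo expected cost of a production vector against sampled demand
    (one row per menu item, one column per sampled day)."""
    production = np.asarray(production, dtype=float)[:, None]
    over = np.maximum(production - demand_samples, 0.0)    # wasted food
    short = np.maximum(demand_samples - production, 0.0)   # unmet demand
    per_day = (PROCURE * production + DISPOSE * over + SHORTFALL * short).sum(axis=0)
    return float(per_day.mean())

# Joint KDE over historical demands preserves cross-item correlation
# (e.g. hamburgers and french fries tend to move together).
history = np.array([[120, 130,  90, 150, 110],    # hamburgers (illustrative)
                    [115, 140,  95, 160, 105]])   # french fries (illustrative)
demand_kde = gaussian_kde(history)
samples = demand_kde.resample(5000, seed=0)

print(expected_cost([125, 130], samples))
```

    In the studies themselves, a Hooke-Jeeves pattern search adjusts the production levels (and, in the second study, the leftover-utilization thresholds) to trade off the waste and shortfall terms of an objective like this one.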

    Motion blur removal from photographs

    Thesis (Ph.D.) by Taeg Sang Cho -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 135-143).
    One of the long-standing challenges in photography is motion blur. Blur artifacts are generated from relative motion between a camera and a scene during exposure. While blur can be reduced by using a shorter exposure, this comes at an unavoidable trade-off with increased noise. Therefore, it is desirable to remove blur computationally. To remove blur, we need to (i) estimate how the image is blurred (i.e. the blur kernel or the point-spread function) and (ii) restore a natural-looking image through deconvolution. Blur kernel estimation is challenging because the algorithm needs to distinguish the correct image-blur pair from incorrect ones that can also adequately explain the blurred image. Deconvolution is also difficult because the algorithm needs to restore high-frequency image content attenuated by blur. In this dissertation, we address a few aspects of these challenges. We introduce an insight that a blur kernel can be estimated by analyzing edges in a blurred photograph. Edge profiles in a blurred image encode projections of the blur kernel, from which we can recover the blur using the inverse Radon transform. This method is computationally attractive and is well suited to images with many edges. Blurred edge profiles can also serve as additional cues for existing kernel estimation algorithms. We introduce a method to integrate this information into a maximum-a-posteriori kernel estimation framework, and show its benefits. Deconvolution algorithms restore information attenuated by blur using an image prior that exploits the heavy-tailed gradient profile of natural images. We show, however, that such a sparse prior does not accurately model textures, thereby degrading texture renditions in restored images. To address this issue, we introduce a content-aware image prior that adapts its characteristics to local textures. The adapted image prior improves the quality of textures in restored images. Sometimes even the content-aware image prior may be insufficient for restoring rich textures. This issue can be addressed by matching the restored image's gradient distribution to the original image's gradient distribution, which is estimated directly from the blurred image. This new image deconvolution technique, called iterative distribution reweighting (IDR), improves the visual realism of reconstructed images. Subject motion can also cause blur. Removing subject motion blur is especially challenging because the blur is often spatially variant. In this dissertation, we address a restricted class of subject motion blur: the subject moves at a constant velocity locally. We design a new computational camera that improves local motion estimation and, at the same time, reduces the image information loss due to blur.
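    The edge-profile/inverse-Radon idea above can be illustrated with a small synthetic experiment: take projections of a known motion-blur kernel (standing in for the projections that blurred edge profiles would encode) and reconstruct the kernel with scikit-image's inverse Radon transform. This sketches only the reconstruction step; extracting the projections from real edge profiles is the part the dissertation actually develops.

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy motion-blur kernel: a short horizontal streak, normalized to sum to 1.
kernel = np.zeros((31, 31))
kernel[15, 8:23] = 1.0
kernel /= kernel.sum()

# Projections at several orientations stand in for the measurements that
# blurred edge profiles would provide in practice.
angles = np.linspace(0.0, 180.0, 24, endpoint=False)
projections = radon(kernel, theta=angles)

# Filtered back-projection (inverse Radon transform) recovers the kernel.
estimate = iradon(projections, theta=angles, filter_name="ramp")
estimate = np.clip(estimate, 0.0, None)
estimate /= estimate.sum()          # blur kernels are nonnegative and sum to 1

print("L1 reconstruction error:", np.abs(estimate - kernel).sum())
```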

    Image mapping using local and global statistics
