
Generating sub-resolution detail in images and volumes using constrained texture synthesis

By Lujin Wang and Klaus Mueller


A common deficiency of discretized datasets is that detail beyond the resolution of the dataset has been irrecoverably lost. This lack of detail becomes immediately apparent once one attempts to zoom into the dataset and recovers only blur. Here, we describe a method that generates the missing detail from any available and plausible high-resolution data, using texture synthesis. Since the detail generation process is guided by the underlying image or volume data and is designed to fill in plausible detail in accordance with the coarse structure and properties of the zoomed-in neighborhood, we refer to our method as constrained texture synthesis. Regular zooms become “semantic zooms”, where each level of detail stems from a data source attuned to that resolution. We demonstrate our approach with a medical application, the visualization of a human liver, but its principles readily apply to any scenario, as long as data at all resolutions are available. We first present a 2D viewing application, called the “virtual microscope”, and then extend our technique to 3D volumetric viewing.
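The abstract above outlines the core idea: grow high-resolution detail pixel by pixel from an exemplar, where each candidate is scored not only on how well it matches the already-synthesized texture neighborhood but also on how well its coarse-level value agrees with the low-resolution data being zoomed. The sketch below is a minimal, assumed illustration of that constraint in the style of non-parametric per-pixel synthesis; it is not the authors' exact algorithm, and all function and parameter names (`constrained_synthesis`, `w_lo`, etc.) are hypothetical.

```python
import numpy as np

def constrained_synthesis(ex_hi, ex_lo, tgt_lo, zoom=2, w_lo=4.0):
    """Hedged sketch of constrained texture synthesis: synthesize a high-res
    image in scan order.  Each output pixel is copied from the exemplar pixel
    whose (a) causal high-res neighborhood and (b) co-located low-res value
    best match the target.  ex_hi / ex_lo are high- and low-res versions of
    the exemplar; tgt_lo is the coarse data being zoomed by factor `zoom`."""
    H, W = tgt_lo.shape[0] * zoom, tgt_lo.shape[1] * zoom
    out = np.zeros((H, W))
    eh, ew = ex_hi.shape
    # Causal neighborhood: pixels already synthesized in scan order.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    for y in range(H):
        for x in range(W):
            ty, tx = y // zoom, x // zoom      # co-located coarse pixel
            best_val, best_cost = 0.0, np.inf
            for sy in range(1, eh - 1):
                for sx in range(1, ew - 1):
                    # Constraint term: the exemplar's coarse value must
                    # agree with the coarse data at the zoomed-in location.
                    cost = w_lo * (ex_lo[sy // zoom, sx // zoom]
                                   - tgt_lo[ty, tx]) ** 2
                    # Texture term: match the already-synthesized neighbors.
                    for dy, dx in offs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W:
                            cost += (out[ny, nx] - ex_hi[sy + dy, sx + dx]) ** 2
                    if cost < best_cost:
                        best_cost, best_val = cost, ex_hi[sy, sx]
            out[y, x] = best_val
    return out
```

Because every output pixel is copied verbatim from the exemplar, the zoomed result reproduces plausible fine-scale statistics while the `w_lo` term keeps it consistent with the coarse structure, which is the sense in which the synthesis is "constrained". A practical implementation would use acceleration structures (e.g. approximate nearest-neighbor search) rather than this exhaustive scan.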

Topics: CR Categories: I.3.7 [Computer Graphics]: Color, shading, shadowing, and texture; I.3.3 [Computer Graphics]: Picture/Image Generation. Keywords: texture synthesis, semantic zoom
Year: 2004
Provided by: CiteSeerX