A common deficiency of discretized datasets is that detail beyond the resolution of the dataset has been irrecoverably lost. This lack of detail becomes immediately apparent once one attempts to zoom into the dataset and recovers only blur. Here, we describe a method that generates the missing detail from any available and plausible high-resolution data, using texture synthesis. Since the detail generation process is guided by the underlying image or volume data and is designed to fill in plausible detail in accordance with the coarse structure and properties of the zoomed-in neighborhood, we refer to our method as constrained texture synthesis. Regular zooms become “semantic zooms”, where each level of detail stems from a data source attuned to that resolution. We demonstrate our approach with a medical application, the visualization of a human liver, but its principles readily apply to any scenario, as long as data at all resolutions are available. We will first present a 2D viewing application, called the “virtual microscope”, and then extend our technique to 3D volumetric viewing.
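The core idea of constrained texture synthesis can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual algorithm: for each pixel of the coarse image, it copies the patch from a high-resolution exemplar whose mean intensity best matches that coarse value, so the synthesized detail stays consistent with the coarse structure. The function name `constrained_synthesis` and the mean-intensity matching criterion are illustrative choices; a real implementation would use richer neighborhood features.

```python
import numpy as np

def constrained_synthesis(coarse, exemplar, scale=4):
    """Toy constrained texture synthesis: upsample `coarse` by `scale`,
    filling each block with the exemplar patch whose mean intensity is
    closest to the corresponding coarse pixel (the 'constraint')."""
    p = scale
    # Collect candidate patches from the high-resolution exemplar.
    patches = []
    for i in range(0, exemplar.shape[0] - p + 1, p):
        for j in range(0, exemplar.shape[1] - p + 1, p):
            patches.append(exemplar[i:i + p, j:j + p])
    patches = np.stack(patches)           # shape: (num_patches, p, p)
    means = patches.mean(axis=(1, 2))     # coarse-level signature of each patch

    out = np.zeros((coarse.shape[0] * p, coarse.shape[1] * p),
                   dtype=exemplar.dtype)
    for r in range(coarse.shape[0]):
        for c in range(coarse.shape[1]):
            # Constraint: synthesized detail must agree with the coarse pixel.
            best = int(np.argmin(np.abs(means - coarse[r, c])))
            out[r * p:(r + 1) * p, c * p:(c + 1) * p] = patches[best]
    return out
```

In practice one would match on full coarse neighborhoods rather than single pixel means, and blend overlapping patches to avoid block seams, but the sketch captures the guiding principle: the exemplar supplies the detail, the coarse data selects it.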