A network for multiscale image segmentation


Detecting the edges of objects in images is a basic problem in computational vision. The scale-space technique introduced by Witkin [11] provides a means of using local and global reasoning to locate edges. This approach has a major drawback: it is difficult to accurately obtain the locations of the 'semantically meaningful' edges. We have refined the definition of scale-space and introduced a class of algorithms for implementing it based on anisotropic diffusion [9]. The algorithms involve simple, local operations replicated over the image, making parallel hardware implementation feasible. In this paper we present the major ideas behind the use of scale-space and anisotropic diffusion for edge detection, show that anisotropic diffusion can enhance edges, suggest a network implementation of anisotropic diffusion, and provide design criteria for obtaining networks that perform scale-space filtering and edge detection. The results of a software implementation are shown.
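As an illustration of the kind of simple, local, replicated operation described above, the following is a minimal sketch of an anisotropic diffusion step in the spirit of [9]: each pixel exchanges value with its four nearest neighbours, weighted by a conduction coefficient that decreases with the local gradient magnitude, so that smoothing proceeds within regions but is inhibited across edges. The parameter names (`kappa`, `lam`, `n_iter`) and the choice of a Gaussian conduction function are illustrative assumptions, not taken from the paper; periodic borders are used only to keep the code short.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=15.0, lam=0.2):
    """Sketch of one family of anisotropic diffusion schemes.

    Smooths within regions while preserving edges by making the
    conduction coefficient a decreasing function of the local
    gradient magnitude. Parameter values are illustrative.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences toward the four nearest neighbours
        # (np.roll gives periodic borders, for brevity only).
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Conduction coefficient: near 1 in flat regions, near 0
        # across large gradients, so diffusion stops at edges.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        # Local update: a weighted exchange with the four neighbours.
        u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```

Because the update at each pixel depends only on its four neighbours, the iteration maps naturally onto a resistive grid or other parallel hardware, which is the motivation for the network implementation discussed in the paper.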
