5 research outputs found

    Improved wolf algorithm on document images detection using optimum mean technique

    Detecting text in handwriting in historical documents provides high-level features for the challenging problem of handwriting recognition. Such handwriting often contains noise, faint or incomplete strokes, strokes with gaps, and competing lines when embedded in a table or form, making it unsuitable for local line-following algorithms or the associated binarization schemes. This paper presents a method based on an optimum threshold value, referred to as the Optimum Mean method. The Wolf method fails to detect thin text in non-uniform input images; the proposed method overcomes this problem by deriving a maximum threshold value from the optimum mean. In the evaluation, the proposed method obtained a higher F-measure (74.53) and PSNR (14.77) and the lowest NRM (0.11) compared with the Wolf method. In conclusion, the proposed method effectively solves the Wolf method's problem and produces a high-quality output image.
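
    For reference, the baseline the paper compares against is the Wolf-Jolion local threshold. The sketch below is a minimal NumPy/SciPy illustration of that baseline, not the authors' code; the window size and k value are common defaults, and the Optimum Mean threshold itself is not reproduced because the abstract does not give its formula.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def wolf_threshold(gray, window=15, k=0.5):
            # Wolf-Jolion local threshold for a grayscale image in [0, 255]:
            # T = (1 - k) * m + k * M + k * (s / R) * (m - M)
            g = gray.astype(np.float64)
            mean = uniform_filter(g, window)                    # local mean m
            sq_mean = uniform_filter(g * g, window)
            std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))   # local std s
            R = max(std.max(), 1e-6)                            # max std over the image
            M = g.min()                                         # darkest pixel value
            return (1 - k) * mean + k * M + k * (std / R) * (mean - M)

        def binarize(gray, window=15, k=0.5):
            # Ink (foreground) is darker than the local threshold.
            return (gray < wolf_threshold(gray, window, k)).astype(np.uint8)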

    Extracting Maya Glyphs from Degraded Ancient Documents via Image Segmentation

    We present a system for automatically extracting hieroglyph strokes from images of degraded ancient Maya codices. Our system adopts a region-based image segmentation framework. Multi-resolution superpixels are first extracted to represent each image. A Support Vector Machine (SVM) classifier is used to label each superpixel region with a probability of belonging to foreground glyph strokes. Pixelwise probability maps from multiple superpixel resolution scales are then aggregated to cope with varying stroke widths and background noise. A fully connected Conditional Random Field model is finally applied to improve labeling consistency. Segmentation results show that our system preserves delicate local details of the historic Maya glyphs across stroke widths while reducing background noise. As an application, we conduct retrieval experiments using the extracted binary images. Experimental results show that our automatically extracted glyph strokes achieve retrieval results comparable to those obtained using glyphs manually segmented by epigraphers in our team.
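
    The pipeline described above can be illustrated with a short sketch. It is not the authors' implementation: SLIC superpixels from scikit-image stand in for the paper's multi-resolution superpixels, the mean region colour stands in for the real feature set, and the dense-CRF refinement step is omitted.

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.svm import SVC

        def glyph_probability_map(image, clf, scales=(300, 600, 1200)):
            # Aggregate per-pixel foreground probabilities over several
            # superpixel resolutions, as in the multi-scale step above.
            prob = np.zeros(image.shape[:2])
            for n_segments in scales:
                labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
                feats = np.array([image[labels == r].mean(axis=0)   # mean colour per region
                                  for r in range(labels.max() + 1)])
                region_prob = clf.predict_proba(feats)[:, 1]        # P(region = glyph stroke)
                prob += region_prob[labels]                         # broadcast back to pixels
            return prob / len(scales)

        # clf is an SVM trained on labelled superpixel features, e.g.:
        # clf = SVC(probability=True).fit(train_features, train_labels)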

    Development of Machine Learning Based Binarization Technique of Hand-drawn Floor Plans for Automatic Extraction of Indoor Spatial Information

    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ๊ฑด์„คํ™˜๊ฒฝ๊ณตํ•™๋ถ€, 2022. 8. ์œ ๊ธฐ์œค.์ตœ๊ทผ ์ธ๊ณต์ง€๋Šฅ, ์‚ฌ๋ฌผ์ธํ„ฐ๋„ท ๋“ฑ์˜ ๋ฐœ์ „๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉ์ž์˜ ์œ„์น˜๋ฅผ ํŒŒ์•…ํ•˜์—ฌ ์‹ค์‹œ๊ฐ„ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•˜๋Š” ์‹ค๋‚ด ์œ„์น˜๊ธฐ๋ฐ˜ ์„œ๋น„์Šค์— ๋Œ€ํ•œ ์‚ฌํšŒ์  ๊ด€์‹ฌ๋„๊ฐ€ ๋†’๋‹ค. ์ด๋Ÿฌํ•œ ์‹ค๋‚ด ์œ„์น˜๊ธฐ๋ฐ˜ ์„œ๋น„์Šค์˜ ํ™œ์„ฑํ™”๋ฅผ ์œ„ํ•ด์„œ๋Š” ์‹ค๋‚ด ๊ณต๊ฐ„์˜ ๋ชจ์Šต์„ ํ‘œํ˜„ํ•˜๋Š” ์‹ค๋‚ด ๊ตฌ์กฐ ํ˜•์ƒํ™” ๋ฐ ๋ชจ๋ธ๋ง์ด ํ•„์ˆ˜์ ์ด๋‹ค. ์ด์— ๋”ฐ๋ผ ๋ ˆ์ด์ € ์Šค์บ๋„ˆ, ๊ฑด์ถ•๋„๋ฉด ์ด๋ฏธ์ง€, CADํ”Œ๋žœ ๋“ฑ ๋‹ค์–‘ํ•œ ์›์ฒœ ๋ฐ์ดํ„ฐ๋กœ๋ถ€ํ„ฐ ์‹ค๋‚ด ๊ณต๊ฐ„์„ ์žฌํ˜„ํ•˜๋Š” ์—ฐ๊ตฌ๋“ค์ด ์ง„ํ–‰๋˜์–ด ์™”๋‹ค. ํŠนํžˆ ์‹ค๋‚ด ๊ณต๊ฐ„์ •๋ณด๋ฅผ ์ž๋™ ์ถ”์ถœ ๊ธฐ์ˆ ์€ ์ˆ˜๋™ ๋ชจ๋ธ๋ง ๋Œ€๋น„ ๊ฒฝ์ œ์ ์œผ๋กœ ๋งค์šฐ ํšจ์œจ์ ์ด๋‹ค. ์ด์— 2์ฐจ์› ๊ฑด์ถ•๋„๋ฉด ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋กœ๋ถ€ํ„ฐ ๋ฒฝ, ์ฐฝ๋ฌธ, ๊ณ„๋‹จ๊ณผ ๊ฐ™์€ ์‹ค๋‚ด ๊ฐ์ฒด๋ฅผ ์ž๋™ ์ถ”์ถœํ•˜์—ฌ 3D ๋ชจ๋ธ๋ง ๋ฐ์ดํ„ฐ๋ฅผ ๊ตฌ์ถ•ํ•˜๋Š” ๋„๋ฉด ํ•ด์„ ์—ฐ๊ตฌ๊ฐ€ ํ™œ๋ฐœํžˆ ์ง„ํ–‰ ์ค‘์— ์žˆ๋‹ค. ๊ธฐ์กด์˜ 2์ฐจ์› ์‚ฌ์ง„ ๊ธฐ๋ฐ˜ ๋„๋ฉด ํ•ด์„ ์—ฐ๊ตฌ๋“ค์€ ๊ฐ์ฒด์™€ ๋ฐฐ๊ฒฝ์ด ๋ช…ํ™•ํžˆ ๊ตฌ๋ถ„๋˜๋ฉฐ ๊ฐ์ฒด๊ฐ€ ์ผ์ •ํ•œ ์ƒ‰์œผ๋กœ ํ‘œํ˜„๋œ ์ „์ž ๋„๋ฉด์„ ๋Œ€์ƒ์œผ๋กœ ์—ฐ๊ตฌ๋ฅผ ์ˆ˜ํ–‰ํ•˜์˜€๋‹ค. ํ•˜์ง€๋งŒ, ํŽœ๊ณผ ์ž‰ํฌ๋ฅผ ์‚ฌ์šฉํ•ด ์ž‘์„ฑ๋œ ํ•ธ๋“œ๋“œ๋กœ์ž‰ ๋„๋ฉด์˜ ๊ฒฝ์šฐ ๊ธฐ์กด ์—ฐ๊ตฌ์— ์‚ฌ์šฉ๋œ ๋„๋ฉด์— ๋น„ํ•ด ๋…ธ์ด์ฆˆ๊ฐ€ ๋งŽ๊ณ  ๋ฐฐ๊ฒฝ ํŒจํ„ด์ด ๋ถˆ๊ทœ์น™์ ์ด๋‹ค. ๋˜ํ•œ ์‚ฌ์šฉ๋œ ํŽœ์ด๋‚˜ ์ž‰ํฌ์— ๋”ฐ๋ผ ๊ฐ์ฒด์˜ ์ƒ‰์ƒ๊ฐ’์ด ์ผ์ •ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ๊ธฐ์กด ์‹ค๋‚ด ๊ณต๊ฐ„ ๊ฐ์ฒด ์ถ”์ถœ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ ์šฉํ•˜๋Š” ๋ฐ์— ํ•œ๊ณ„๊ฐ€ ์กด์žฌํ•œ๋‹ค. ์ด์— ๋ณธ ์—ฐ๊ตฌ๋Š” ๋…ธ์ด์ฆˆ๊ฐ€ ์‹ฌํ•˜๊ณ  ๋ถˆ๊ทœ์น™์ ์ธ ํ•ธ๋“œ๋“œ๋กœ์ž‰ ๊ฑด์ถ•๋„๋ฉด์„ ๋Œ€์ƒ์œผ๋กœ ์‹ค๋‚ด ๊ณต๊ฐ„์„ ๊ตฌ์„ฑํ•˜๋Š” ๊ฐ์ฒด์™€ ๋ฐฐ๊ฒฝ์„ ๊ตฌ๋ถ„ํ•˜๋Š” ์ด์ง„ํ™”๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•œ๋‹ค. ๋ณธ ์—ฐ๊ตฌ๋Š” ์ „์ž ๋„๋ฉด ๋Œ€์ƒ์˜ ๊ธฐ์กด ์‹ค๋‚ด ๊ณต๊ฐ„์ •๋ณด ์ž๋™ ์ถ”์ถœ ์—ฐ๊ตฌ์˜ ๋ฒ”์œ„๋ฅผ ์—ญ์‚ฌ์  ๊ฑด์ถ•๋ฌผ์ด๋‚˜ ๊ฑด์ถ• ์—ฐ๋„๊ฐ€ ์˜ค๋ž˜๋˜์–ด ์•„๋‚ ๋กœ๊ทธ ๋ฐฉ์‹์œผ๋กœ ์ž‘์„ฑ๋œ ๊ฑด์ถ•๋„๋ฉด๋งŒ ์กด์žฌํ•˜๋Š” ๊ฑด๋ฌผ์„ ๋Œ€์ƒ์œผ๋กœ ํ™•์žฅํ•˜๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•œ๋‹ค. ๋ถ„์„ ๋ฐ์ดํ„ฐ๋กœ์„œ 1900๋…„๋Œ€ ์ดˆ๋ฐ˜์— ์ž‘์„ฑ๋œ ์ผ์ œ์‹œ๊ธฐ ๊ฑด์ถ•๋„๋ฉด์„ ํ™œ์šฉํ•˜์—ฌ ์—ฐ๊ตฌ๋ฅผ ์ˆ˜ํ–‰ํ•˜์˜€๋‹ค. ๋ณธ ์—ฐ๊ตฌ์— ์‚ฌ์šฉ๋œ ์ผ์ œ์‹œ๊ธฐ ๊ฑด์ถ•๋„๋ฉด์€ ์ข…์ด๋ฅ˜ ๋ฌธํ™”์žฌ ํŠน์„ฑ์ƒ ๋ณด๊ด€ ๋ฐ ๋””์ง€ํ„ธํ™” ๊ณผ์ •์—์„œ ๋‹ค์–‘ํ•œ ํ˜•ํƒœ์˜ ๋…ธ์ด์ฆˆ๊ฐ€ ์กด์žฌํ•˜๋ฉฐ ์ž‘์„ฑ ์‹œ ์‚ฌ์šฉ๋œ ํ•„๊ธฐ๋ฅ˜ ์ข…๋ฅ˜์— ๋”ฐ๋ผ ๊ฐ์ฒด์˜ ์ƒ‰์ƒ ๊ฐ’์ด ์ผ์ •ํ•˜์ง€ ๋ชปํ•˜๋‹ค. ๋˜ํ•œ ํ•ธ๋“œ๋“œ๋กœ์ž‰ ๊ฑด์ถ•๋„๋ฉด ์ด๋ฏธ์ง€๋งˆ๋‹ค ๋‚˜ํƒ€๋‚˜๋Š” ๋…ธ์ด์ฆˆ์˜ ํ”ฝ์…€๊ฐ’๊ณผ ์‹ค๋‚ด ๊ฐ์ฒด์˜ ์„ ๋ช…๋„๊ฐ€ ๋‹ค๋ฅด๊ธฐ ๋•Œ๋ฌธ์— ๋จธ์‹ ๋Ÿฌ๋‹ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•œ ํ•™์Šต ๊ธฐ๋ฐ˜ ์ด์ง„ํ™” ๊ธฐ๋ฒ•์„ ์ ์šฉํ•˜์˜€๋‹ค. ์ด์ง„ํ™”๋Š” ์ œ๊ฑฐํ•˜๊ณ ์ž ํ•˜๋Š” ๋…ธ์ด์ฆˆ์˜ ํ˜•ํƒœ์— ๋”ฐ๋ผ ํฌ๊ฒŒ ๋‘ ๊ฐ€์ง€ ๋‹จ๊ณ„๋กœ ์ง„ํ–‰๋œ๋‹ค. ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„๋Š” ๊ฐ€์šฐ์‹œ์•ˆ ํ˜ผํ•ฉ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋„๋ฉด ์ด๋ฏธ์ง€์˜ ๋ฐฐ๊ฒฝ์— ์ „์ฒด์ ์œผ๋กœ ๋„“๊ฒŒ ๋ถ„ํฌํ•˜๋Š” ๋…ธ์ด์ฆˆ๋ฅผ ๊ฐ์†Œ์‹œํ‚ค๋Š” ๋‹จ๊ณ„์ด๋‹ค. ๋‘ ๋ฒˆ์งธ ๋‹จ๊ณ„๋Š” ๋žœ๋คํฌ๋ ˆ์ŠคํŠธ ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ฐ์ฒด์™€ ๋ฐฐ๊ฒฝ์„ ๊ตฌ๋ถ„ํ•˜๋Š” ํŠน์ง•์„ ์ถ”์ถœํ•˜์—ฌ ๋ฉด์ ์ด ์ž‘๊ณ  ๋‹ค์–‘ํ•œ ํ˜•ํƒœ์˜ ๋…ธ์ด์ฆˆ๋ฅผ ํ•™์Šต ๋ฐ ์ œ๊ฑฐ์‹œํ‚ค๋Š” ๋‹จ๊ณ„์ด๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•๋ก ์— ๋Œ€ํ•œ ๊ฒ€์ฆ์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ํ•™์Šต ๊ณผ์ •์— ์‚ฌ์šฉ๋˜์ง€ ์•Š์€ ํ…Œ์ŠคํŠธ ์…‹์— ๋Œ€ํ•œ ๋ถ„๋ฅ˜ ๋ชจ๋ธ ์„ฑ๋Šฅ ํ‰๊ฐ€์™€ ์ตœ์ข… ๊ฒฐ๊ณผ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์ด๋ฏธ์ง€ ํ’ˆ์งˆ ํ‰๊ฐ€๋ฅผ ์ง„ํ–‰ํ–ˆ๋‹ค. 
์‹คํ—˜ ๊ฒฐ๊ณผ, ๋ถ„๋ฅ˜ ๋ชจ๋ธ ์„ฑ๋Šฅ ํ‰๊ฐ€์˜ ๊ฒฝ์šฐ ๋žœ๋คํฌ๋ ˆ์ŠคํŠธ ๋ชจ๋ธ์˜ ํ‰๊ท  ์ •๋ฐ€๋„ ๋ฐ ์žฌํ˜„์œจ์€ ๊ฐ๊ฐ 0.985์™€ 0.99์ด๊ณ  ์ตœ์ข… ์ด์ง„ํ™” ๊ฒฐ๊ณผ ์ด๋ฏธ์ง€์˜ ์‹ ํ˜ธ ๋Œ€๋น„ ์žก์Œ ๋น„ ์ง€ํ‘œ๋Š” 16.543์˜ ๊ฒฐ๊ณผ๋ฅผ ์–ป์—ˆ๋‹ค. ์ด์ง„ํ™” ๊ฒฐ๊ณผ, ์„ ํ–‰ ์—ฐ๊ตฌ ๋Œ€๋น„ ๋‹ค์–‘ํ•œ ๋‘๊ป˜๋กœ ๊ตฌ์„ฑ๋œ ๋ฒฝ, ์ฐฝ๋ฌธ, ๊ฐ€๋ฒฝ๊ณผ ๊ฐ™์€ ์‹ค๋‚ด ๊ณต๊ฐ„ ๊ฐ์ฒด์™€ ๋ฐฐ๊ฒฝ์„ ์„ฑ๊ณต์ ์œผ๋กœ ๋ถ„๋ฆฌํ•˜์˜€๋‹ค. ๋˜ํ•œ ๋ชจ๋ธ์˜ ์ผ๋ฐ˜ํ™” ์„ฑ๋Šฅ ๊ฒ€์ฆ์„ ์œ„ํ•ด ๋ฒ ๋ฅด์‚ฌ์œ  ๊ถ์ „ ๊ฑด์ถ•๋„๋ฉด์— ๋Œ€ํ•ด ๋ณธ ์—ฐ๊ตฌ์˜ ์ด์ง„ํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ ์šฉํ•˜์˜€๋‹ค. ์ ์šฉ ๊ฒฐ๊ณผ, ์ •๋ฐ€๋„ ๋ฐ ์žฌํ˜„์œจ์€ ๊ฐ๊ฐ 0.998์™€ 0.969์ด๊ณ  ๊ฒฐ๊ณผ ์ด๋ฏธ์ง€์˜ ํ’ˆ์งˆ์„ ํ‰๊ฐ€ํ•˜๋Š” ์ง€ํ‘œ ์—ญ์‹œ ํ…Œ์ŠคํŠธ ์…‹๊ณผ ์œ ์‚ฌํ•˜๊ฒŒ ์šฐ์ˆ˜ํ•œ ์„ฑ๋Šฅ์„ ๋‚˜ํƒ€๋ƒˆ๋‹ค. ๋ณธ ์—ฐ๊ตฌ๋Š” ๊ธฐ์กด ๋„๋ฉด ํ•ด์„ ์—ฐ๊ตฌ์˜ ํ™œ์šฉ์ฒ˜๋ฅผ ํ•ธ๋“œ๋“œ๋กœ์ž‰ ๊ฑด์ถ•๋„๋ฉด์œผ๋กœ ํ™•์žฅํ•˜๋Š” ๊ธฐ๋ฐ˜์„ ๋งˆ๋ จํ–ˆ๋‹ค๋Š” ์ ์—์„œ ์˜์˜๊ฐ€ ์žˆ๋‹ค.Along with the recent development of artificial intelligence and the Internet of Things, social interest in indoor location-based services providing real-time information from user location is getting high. For location-based service development, indoor spatial modelling is essential to represent indoor topology. Therefore, many studies have been conducted to extract indoor structure information from various types of data such as laser scanners, architectural drawing images, and CAD plans. In particular, the automatic extraction technology of indoor space information is economically efficient compared to manual modeling, so algorithms for automatic extraction of floor plan entities like walls, windows, and stairs from 2D floor plan image are actively developed. Previous studies mostly used โ€œcleanโ€ floor images that floor plan entities and background are clearly distinguished. However, in the case of hand-drawing architectural floor plans created using various types of pens and ink, there are large numbers of noise in background. In addition, since the pixel intensities of every floor plan entities are not constant depending on the pen or ink used, there is a limit to applying the previous algorithms. Therefore, this study aims to perform binarization to distinguish floor plan entities from background with noise and irregular patterns. The purpose of this study is to expand the scope of previous floor plan analysis studies to historical and old buildings. For dataset, we use architectural drawings of the Japanese colonial period written in the early 1900s. The Japanese architectural drawings used in this study have various types of noise made during the process of storage and digitization. Also, floor plan entities consist of all different colors depending on the type of materials used. We apply learning-based binarizaiton algorithm and our algorithm can be divided into two main steps. The first step is to reduce the noise that is widely distributed across the background of the drawing image using a Gaussian mixture model. The second step is to extract features that distinguish objects and backgrounds based on the random forest model, and to learn various forms of small noise. For evaluation, we perform the classification performance of suggested algorithm on test set. Our binarization algorithm results in 98.5% precision and 99.0% F1-score rate. This study has two main contributions. First, our algorithm successfully distinguishes various types of floor plan entities with different thickness. 
Second, study scope of automatic extraction of spatial information from floor plan image can be expanded from electronic floor plan image to hand-drawing architectural floor plans.1. ์„œ๋ก  1 1.1 ์—ฐ๊ตฌ ๋ฐฐ๊ฒฝ ๋ฐ ๋ชฉ์  1 1.2 ์ด์ง„ํ™” ์—ฐ๊ตฌ ๋™ํ–ฅ 4 1.2.1 ๊ทœ์น™ ๊ธฐ๋ฐ˜ ์ด์ง„ํ™” ๋ฐฉ๋ฒ•๋ก  7 1.2.2 ํ•™์Šต ๊ธฐ๋ฐ˜ ์ด์ง„ํ™” ๋ฐฉ๋ฒ•๋ก  10 1.2.3 ์‹œ์‚ฌ์  ๋ฐ ๊ฒฐ๋ก  12 1.3 ์—ฐ๊ตฌ ๋ฒ”์œ„ ๋ฐ ๋ฐฉ๋ฒ• 14 2. ์—ฐ๊ตฌ ๋ฐฉ๋ฒ• 17 2.1 ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘ ๋ฐ ์ „์ฒ˜๋ฆฌ 17 2.2 ๋ฐฐ๊ฒฝ ์˜ˆ์ธก ๋ฐ ์ œ๊ฑฐ 19 2.2.1 ํ”ฝ์…€๊ฐ’ ๋นˆ๋„ ๋ถ„์„ 19 2.2.2 ์ด์ƒ๊ฐ’ ํ•„ํ„ฐ๋ง 21 2.2.3 ๊ฐ€์šฐ์‹œ์•ˆ ํ˜ผํ•ฉ ๋ชจ๋ธ 24 2.2.4 ๋ฐฐ๊ฒฝ ์ œ๊ฑฐ ์ด๋ฏธ์ง€ ์ƒ์„ฑ 26 2.3 ๋จธ์‹ ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜ ๋„๋ฉด ์ด์ง„ํ™” 27 2.3.1 ํŠน์ง• ์ถ”์ถœ 27 2.3.1.1 ํ†ต๊ณ„์  ํŠน์„ฑ 30 2.3.1.2 ๋ช…์•”๋„ ๋™์‹œํ–‰๋ ฌ์˜ ํ†ต๊ณ„์  ํŠน์„ฑ 31 2.3.1.3 ์ˆ˜์ง-์ˆ˜ํ‰ ์—ฐ์†์„ฑ ํ–‰๋ ฌ 35 2.3.2 ๋žœ๋คํฌ๋ ˆ์ŠคํŠธ ๋ชจ๋ธ 38 2.3.3 ์žฌ๊ท€์  ํŠน์ง• ์ œ๊ฑฐ๋ฒ• 42 2.3.4 ํ‰๊ฐ€์ง€ํ‘œ 44 2.4 ํ›„์ฒ˜๋ฆฌ 47 3. ์‹คํ—˜ ์ ์šฉ ๋ฐ ๊ฒฐ๊ณผ 49 3.1 ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘ ๋ฐ ์ „์ฒ˜๋ฆฌ ๊ฒฐ๊ณผ 49 3.2 ๋ฐฐ๊ฒฝ ์˜ˆ์ธก ๋ฐ ์ œ๊ฑฐ ๊ฒฐ๊ณผ 52 3.3 ํŠน์ง• ์ถ”์ถœ ๊ฒฐ๊ณผ 56 3.3.1 ๋ช…์•”๋„ ๋™์‹œ๋ฐœ์ƒ ํ–‰๋ ฌ ํŠน์ง• ์ถ”์ถœ ๊ฒฐ๊ณผ 56 3.3.2 ์ˆ˜์ง-์ˆ˜ํ‰ ์—ฐ์†์„ฑ ํ–‰๋ ฌ ํŠน์ง• ์ถ”์ถœ ๊ฒฐ๊ณผ 58 3.4 ๋จธ์‹ ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜ ๋„๋ฉด ์ด์ง„ํ™” ํ‰๊ฐ€ ๊ฒฐ๊ณผ 59 3.4.1 ํŠน์ง• ์ค‘์š”๋„ ๋ฐ ์ตœ์  ํŠน์ง• ์กฐํ•ฉ 59 3.4.2 ๋ถ„๋ฅ˜ ๋ชจ๋ธ ์„ฑ๋Šฅ ๋น„๊ต 63 3.4.3 ์ด์ง„ํ™” ๊ฒฐ๊ณผ ์ด๋ฏธ์ง€์˜ ํ’ˆ์งˆ ๋น„๊ต 65 3.5 ๋จธ์‹ ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜ ๋„๋ฉด ์ด์ง„ํ™” ์ ์šฉ ๊ฒฐ๊ณผ 68 3.5.1 ์†Œ์ถ•์ฒ™ ๋„๋ฉด์—์„œ์˜ ์ด์ง„ํ™” ์ ์šฉ ๊ฒฐ๊ณผ 70 3.5.2 ๋Œ€์ถ•์ฒ™ ๋„๋ฉด์—์„œ์˜ ์ด์ง„ํ™” ์ ์šฉ ๊ฒฐ๊ณผ 72 3.6 ๋‹ค์–‘ํ•œ ํ•ธ๋“œ๋“œ๋กœ์ž‰ ๊ฑด์ถ•๋„๋ฉด์˜ ์ด์ง„ํ™” ํ‰๊ฐ€ ๋ฐ ์ ์šฉ ๊ฒฐ๊ณผ 74 3.6.1 ์ง์„  ๊ฐ์ฒด๋กœ ๊ตฌ์„ฑ๋œ ๋ฒ ๋ฅด์‚ฌ์œ  ๊ถ์ „ ๊ฑด์ถ•๋„๋ฉด์˜ ์ด์ง„ํ™” 75 3.6.2 ๊ณก์„  ๊ฐ์ฒด๋ฅผ ํฌํ•จํ•˜๋Š” ๋ฒ ๋ฅด์‚ฌ์œ  ๊ถ์ „ ๊ฑด์ถ•๋„๋ฉด์˜ ์ด์ง„ํ™” 76 4. ๊ฒฐ๋ก  79 ์ฐธ ๊ณ  ๋ฌธ ํ—Œ 82 ๋ถ€ ๋ก 86 Abstract 112์„
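
    A minimal sketch of the two-stage approach described above (Gaussian mixture background suppression, then per-pixel random forest classification) is given below. The local mean and standard deviation features are placeholders for the GLCM and vertical-horizontal continuity features listed in the contents, and labelled training data is assumed to exist.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.mixture import GaussianMixture
        from sklearn.ensemble import RandomForestClassifier

        def suppress_background(gray, n_components=3):
            # Stage 1: fit a GMM to pixel intensities and blank out the
            # brightest component, which models the paper background.
            gmm = GaussianMixture(n_components=n_components, random_state=0)
            labels = gmm.fit_predict(gray.reshape(-1, 1)).reshape(gray.shape)
            background = int(np.argmax(gmm.means_.ravel()))
            cleaned = gray.copy()
            cleaned[labels == background] = 255
            return cleaned

        def pixel_features(gray, window=9):
            # Simple per-pixel features; stand-ins for the thesis's GLCM and
            # vertical-horizontal continuity features.
            g = gray.astype(np.float64)
            mean = uniform_filter(g, window)
            std = np.sqrt(np.maximum(uniform_filter(g * g, window) - mean ** 2, 0))
            return np.stack([g, mean, std], axis=-1).reshape(-1, 3)

        # Stage 2: a random forest trained on labelled pixels separates ink from noise.
        # rf = RandomForestClassifier(n_estimators=100).fit(train_feats, train_labels)
        # binary = rf.predict(pixel_features(suppress_background(gray))).reshape(gray.shape)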

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing are data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.