7 research outputs found

    Fully-automated, CT-only GTV contouring for palliative head and neck radiotherapy.

    Planning for palliative radiotherapy is performed without the advantage of MR or PET imaging in many clinics. Here, we investigated CT-only GTV delineation for palliative treatment of head and neck cancer. Two multi-institutional datasets of palliative-intent treatment plans were retrospectively acquired: a set of 102 non-contrast-enhanced CTs and a set of 96 contrast-enhanced CTs. The nnU-Net auto-segmentation network was chosen for its strength in medical image segmentation, and five approaches were separately trained: (1) heuristic-cropped, non-contrast images with a single GTV channel; (2) cropping around a manually placed point in the tumor center for non-contrast images with a single GTV channel; (3) contrast-enhanced images with a single GTV channel; (4) contrast-enhanced images with separate primary and nodal GTV channels; and (5) contrast-enhanced images along with synthetic MR images with separate primary and nodal GTV channels. Median Dice similarity coefficient ranged from 0.6 to 0.7, surface Dice from 0.30 to 0.56, and 95th-percentile Hausdorff distance from 14.7 to 19.7 mm across the five approaches. Only surface Dice exhibited statistically significant differences across these five approaches using a two-tailed Wilcoxon rank-sum test (p ≤ 0.05). Our CT-only results met or exceeded published values for head and neck GTV autocontouring using multi-modality images. However, significant edits would be necessary before clinical use in palliative radiotherapy.
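The three metrics reported above (Dice, surface Dice, 95th-percentile Hausdorff distance) are standard segmentation-quality measures. This is not the study's evaluation code; it is a minimal numpy/scipy sketch of how volumetric Dice and a voxel-based approximation of the 95th-percentile Hausdorff distance are commonly computed for binary masks, using synthetic `gt`/`pred` cubes as stand-in contours.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(a, b):
    """Volumetric Dice: 2|A∩B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (mm if spacing is mm).

    Voxel-based approximation: distances are taken from every foreground
    voxel to the other mask, rather than from extracted surface meshes.
    """
    a, b = a.astype(bool), b.astype(bool)
    dist_to_b = distance_transform_edt(~b, sampling=spacing)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    d_ab = dist_to_b[a]   # distances from A's voxels to B
    d_ba = dist_to_a[b]   # distances from B's voxels to A
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Two overlapping 10x10x10 cubes standing in for GTV contours
gt = np.zeros((20, 20, 20), bool); gt[5:15, 5:15, 5:15] = True
pred = np.zeros((20, 20, 20), bool); pred[7:17, 5:15, 5:15] = True
print(round(dice_coefficient(gt, pred), 2))  # 0.8
```

Passing a CT's voxel spacing via `sampling` is what makes the Hausdorff value come out in millimetres, as in the 14.7–19.7 mm range quoted above.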

    ์ข…์–‘์˜ 3์ฐจ์›์  ์œ„์น˜ ํŒŒ์•…์„ ์œ„ํ•œ ๋ฉ”์‰ฌ ๊ตฌ์กฐ์˜ 3D ๋ชจ๋ธ๋ง ๊ธฐ์ˆ  ๊ฐœ๋ฐœ ๋ฐ ์ž„์ƒ์  ์œ ์šฉ์„ฑ ํ‰๊ฐ€

    Doctoral dissertation, Department of Medicine, College of Medicine, Seoul National University Graduate School, February 2022. Advisor: 김희찬.
    Background: As a method of three-dimensional (3D) localization of tumors, 3D printing has been introduced to medicine. However, the high costs and lengthy production times required have limited its application. Objectives: The goal of the first study was to develop a new and less costly 3D modeling method, "mesh-type 3D modeling", to depict organ–tumor relations. The second study was designed to evaluate the clinical usefulness of a personalized mesh-type 3D-printed thyroid gland model for obtaining informed consent. Methods: For the mesh-type 3D modeling, coordinates were extracted at a specified distance interval from tomographic images and connected to create mesh-work replicas. Adjacent constructs were depicted by density variations, showing anatomical targets (i.e., tumors) in contrasting colors. A randomized, controlled prospective clinical trial (KCT0005069) was designed: a total of 53 patients undergoing thyroid surgery were randomly assigned to two groups, with or without a 3D-printed model of their thyroid lesion upon obtaining informed consent. A U-Net-based deep learning architecture and the mesh-type 3D modeling technique were used to fabricate the personalized 3D models. Results: To establish the mesh-type 3D modeling technique, an array of organ–solid tumor models was printed via a Fused Deposition Modeling (FDM) 3D printer at low cost (USD 0.05/cm3) and time expenditure (1.73 min/cm3). The printed models were sufficient to visually distinguish organ–tumor anatomy and adjacent tissues when compared with the specimens resected during surgery. In the prospective clinical study, the mean 3D printing time for the 53 patients' thyroid models was 258.9 min, and the mean production cost was USD 4.23 per patient. The size, location, and anatomical relationship of the tumor with respect to the thyroid gland could be effectively presented. The group provided with personalized 3D-printed models during informed consent showed statistically significant improvement across all four categories (general knowledge, benefits of surgery, risks of surgery, and satisfaction; all p < 0.05). All patients received their personalized 3D model after surgery and found it helpful toward understanding the disease, operation, and possible complications, as well as enhancing their overall satisfaction. Conclusion: The personalized 3D-printed thyroid gland model may be an effective tool for improving a patient's understanding and satisfaction during the informed consent process. Furthermore, the mesh-type 3D modeling reproduced glandular size/contour and tumor location, readily approximating the surgical specimen. This newly devised mesh-type 3D printing method may facilitate anatomical modeling for personalized care and improve patient awareness during informed surgical consent.
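The core of the mesh-type modeling is connecting contour coordinates, sampled at a fixed interval on each tomographic slice, into a printable mesh-work shell. The dissertation does not publish its implementation; the following is a minimal sketch of that slice-stitching idea, assuming every slice contributes a contour ring with the same number of sampled points.

```python
import numpy as np

def mesh_from_slices(contours, slice_spacing=1.0):
    """Connect per-slice contour samples into a mesh-work shell.

    contours: list of (n, 2) arrays, one closed ring of sampled (x, y)
    coordinates per tomographic slice, all with the same point count n.
    Returns (vertices, faces): 3D points and triangular faces.
    """
    n = contours[0].shape[0]
    vertices, faces = [], []
    for k, ring in enumerate(contours):
        z = k * slice_spacing  # stack slices along z
        vertices.extend([(x, y, z) for x, y in ring])
    for k in range(len(contours) - 1):
        base, nxt = k * n, (k + 1) * n
        for i in range(n):
            j = (i + 1) % n  # wrap around the closed ring
            # Two triangles joining edge (i, j) on slice k to slice k+1
            faces.append((base + i, base + j, nxt + i))
            faces.append((base + j, nxt + j, nxt + i))
    return np.array(vertices), np.array(faces)

# Two identical square rings on consecutive slices, 2 mm apart
ring = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], float)
verts, faces = mesh_from_slices([ring, ring], slice_spacing=2.0)
print(verts.shape, faces.shape)  # (8, 3) (8, 3)
```

A shell like this prints as an open lattice rather than a solid, which is where the reported cost and time savings per cm3 would come from.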

    Organ at Risk Segmentation in Head and Neck CT Images Using a Two-Stage Segmentation Framework Based on 3D U-Net

    No full text

    U-Net and its variants for medical image segmentation: theory and applications

    U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits provide U-net with a very high utility within the medical imaging community and have resulted in extensive adoption of U-net as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use in all major image modalities from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of the use of U-net in other applications. As the potential of U-net is still increasing, in this review we look at the various developments that have been made in the U-net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-net. Furthermore, we look at image modalities and application areas where U-net has been applied. Comment: 42 pages; published in IEEE Access.
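A practical constraint the architecture imposes, worth keeping in mind with any U-net variant: each encoder level halves the spatial resolution and (conventionally) doubles the channel count, and each decoder level upsamples and concatenates the matching encoder feature map, so the input size must be divisible by 2^depth for the skip connections to align. The bookkeeping can be sketched as plain arithmetic (`base_channels=64` and `depth=4` follow the original U-net; other variants differ):

```python
def unet_shapes(input_size, base_channels=64, depth=4):
    """Track (spatial size, channels) down the encoder and back up.

    Each encoder level halves resolution and doubles channels; each
    decoder level upsamples and concatenates the matching skip tensor,
    so channel counts shown for the decoder are post-concatenation.
    """
    assert input_size % (2 ** depth) == 0, "size must be divisible by 2**depth"
    encoder = [(input_size // 2 ** d, base_channels * 2 ** d)
               for d in range(depth + 1)]          # last entry = bottleneck
    decoder = [(s, c + c) for s, c in reversed(encoder[:-1])]
    return encoder, decoder

enc, dec = unet_shapes(256)
print(enc[-1])  # bottleneck: (16, 1024)
print(dec[0])   # first decoder level after skip concat: (32, 1024)
```

Running the same check on an input of 250 raises immediately, which is exactly the misalignment that padding or resizing to a multiple of 2^depth avoids in practice.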

    Automated Analysis of Drill-Core Images Using Convolutional Neural Network

    Drill cores provide geological and geotechnical information essential for mineral and hydrocarbon exploration. Modern core scanners can automatically produce a large number of high-resolution core-tray images or unwrapped-core images, which encode important rock properties, such as lithology and geological structures. Current core-image analysis methods, however, are based on outdated algorithms that lack generalization and robustness. In addition, current methods focus on using log data while core images often provide more reliable information about the subsurface formations. With the new era of technology and the evolution of big data and artificial intelligence, core images will be an important asset for subsurface characterization. The future of core analysis, driven by the digital archiving of cores, needs to be considered since the manual core description and its extensive time and labor requirements are outdated. This dissertation aims to lay the foundation of a 'Digital Geologist' using advanced machine learning algorithms. It develops and evaluates intelligent workflows using Convolutional Neural Networks (CNNs) to automate core-image analysis, and thus facilitate the evaluation of natural resources. It explores the feasibility of extracting different rock features from core images. First, advanced CNNs are utilized to predict major lithologies of rocks from core-tray images and an overall workflow is optimized for lithology prediction. Second, a CNN is created to assess the physical condition of cores and determine intact core sections to calculate the rock quality designation (RQD) index, which is essential in many geotechnical applications. Third, an innovative approach is developed to extract fractures from unwrapped-core images and determine fracture depth and orientation. The workflow is based on using a state-of-the-art CNN model for instance segmentation, the Mask Region-based Convolutional Neural Network (Mask R-CNN). 
Lastly, fracture analysis from unwrapped-core images is further studied to obtain more detailed characteristics represented by fracture apertures. Overall, the thesis proposes a transformed workflow of core-image analysis that can be a platform for future studies with potential application in the mining and petroleum industries
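The RQD index mentioned above has a standard geotechnical definition: the summed length of intact core pieces at least 10 cm long, divided by the total length of the core run, expressed as a percentage. In the dissertation a CNN identifies the intact sections; as a minimal sketch, assume the piece lengths have already been measured:

```python
def rock_quality_designation(piece_lengths_cm, run_length_cm):
    """RQD (%) = (sum of intact pieces >= 10 cm) / (total run length) * 100."""
    intact = sum(length for length in piece_lengths_cm if length >= 10.0)
    return 100.0 * intact / run_length_cm

# A 100 cm core run broken into pieces; only pieces >= 10 cm count
pieces = [25.0, 8.0, 15.0, 5.0, 30.0, 12.0, 5.0]
print(rock_quality_designation(pieces, 100.0))  # 82.0
```

The 10 cm threshold is what makes the index sensitive to fracturing: a heavily broken run of the same recovered length scores far lower.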

    MEDICAL MACHINE INTELLIGENCE: DATA-EFFICIENCY AND KNOWLEDGE-AWARENESS

    Traditional clinician diagnosis requires massive manual labor from experienced doctors, which is time-consuming and costly. Computer-aided systems are therefore proposed to reduce doctors' efforts by using machines to automatically make diagnosis and treatment recommendations. The recent success in deep learning has largely advanced the field of computer-aided diagnosis by offering an avenue to deliver automated medical image analysis. Despite such progress, there remain several challenges towards medical machine intelligence, such as unsatisfactory performance regarding challenging small targets, insufficient training data, high annotation cost, the lack of domain-specific knowledge, etc. These challenges cultivate the need for developing data-efficient and knowledge-aware deep learning techniques which can generalize to different medical tasks without requiring intensive manual labeling efforts, and incorporate domain-specific knowledge in the learning process. In this thesis, we rethink the current progress of deep learning in medical image analysis, with a focus on the aforementioned challenges, and present different data-efficient and knowledge-aware deep learning approaches to address them accordingly. Firstly, we introduce coarse-to-fine mechanisms which use the prediction from the first (coarse) stage to shrink the input region for the second (fine) stage, to enhance the model performance especially for segmenting small challenging structures, such as the pancreas which occupies only a very small fraction (e.g., < 0.5%) of the entire CT volume. The method achieved the state-of-the-art result on the NIH pancreas segmentation dataset. Further extensions also demonstrated effectiveness for segmenting neoplasms such as pancreatic cysts or multiple organs. Secondly, we present a semi-supervised learning framework for medical image segmentation by leveraging both limited labeled data and abundant unlabeled data. 
Our learning method encourages the segmentation output to be consistent for the same input under different viewing conditions. More importantly, the outputs from different viewing directions are fused together to improve the quality of the target, which further enhances the overall performance. The comparison with fully-supervised methods on multi-organ segmentation confirms the effectiveness of this method. Thirdly, we discuss how to incorporate knowledge priors for multi-organ segmentation. Noticing that abdominal organ sizes exhibit similar distributions across different cohorts, we propose to explicitly incorporate anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. The approach achieves 84.97% on the MICCAI 2015 challenge "Multi-Atlas Labeling Beyond the Cranial Vault", significantly outperforming the previous state of the art even while using fewer annotations. Lastly, by rethinking how radiologists interpret medical images, we identify one limitation of existing deep-learning-based work on detecting pancreatic ductal adenocarcinoma: the lack of knowledge integration from multi-phase images. We therefore introduce a dual-path network in which different paths are connected for multi-phase information exchange, and an additional loss is added to remove view divergence. By effectively incorporating multi-phase information, the presented method shows performance superior to prior art on this task.
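The coarse-to-fine mechanism described in this abstract hinges on one concrete step: using the coarse stage's prediction to shrink the fine stage's input to a tight region of interest. This is not the thesis's code; it is a minimal numpy sketch of that cropping step, with a hypothetical `margin` parameter standing in for whatever safety border the fine stage needs.

```python
import numpy as np

def crop_to_coarse_mask(volume, coarse_mask, margin=5):
    """Crop a CT volume to the bounding box of a coarse prediction,
    expanded by a safety margin, so the fine stage sees a small ROI
    instead of the full scan (where a pancreas may be < 0.5% of voxels)."""
    idx = np.argwhere(coarse_mask)           # (n, 3) foreground coordinates
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[slices], slices

# A 100^3 volume where the coarse mask covers a small target region
vol = np.random.rand(100, 100, 100)
mask = np.zeros((100, 100, 100), bool)
mask[40:50, 42:48, 45:55] = True
roi, slices = crop_to_coarse_mask(vol, mask, margin=5)
print(roi.shape)  # (20, 16, 20)
```

The returned `slices` let the fine stage's prediction be pasted back into full-volume coordinates, which is how the two stages compose at inference time.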