7 research outputs found

    Computational Imaging with Limited Photon Budget

    The capability of retrieving the image/signal of interest from extremely low photon flux is attractive in scientific, industrial, and medical imaging applications. Conventional imaging modalities and reconstruction algorithms rely on hundreds to thousands of photons per pixel (or per measurement) to ensure a high enough signal-to-noise ratio (SNR) for extracting the image/signal of interest. Unfortunately, the risk of radiation or photon damage prohibits high-SNR measurements in dose-sensitive diagnostic scenarios. In addition, imaging systems that use inherently weak signals as the contrast mechanism, such as X-ray scattering-based tomography or attosecond pulse retrieval from the streaking trace, require prolonged integration times to acquire even hundreds of photons, rendering high-SNR measurement impractical. This dissertation addresses the problem of imaging from a limited photon budget when high-SNR measurements are either prohibitive or impractical. A statistical image reconstruction framework based on knowledge of the image-formation process and the noise model of the measurement system has been constructed and successfully demonstrated on two imaging platforms: photon-counting X-ray imaging and attosecond pulse retrieval. For photon-counting X-ray imaging, the framework achieves high-fidelity X-ray projection and tomographic image reconstruction from as few as 16 photons per pixel on average. The framework's ability to model the reconstruction error opens the opportunity to design optimal strategies for distributing a fixed photon budget in region-of-interest (ROI) reconstruction, paving the way for radiation dose management in imaging-specific tasks. For attosecond pulse retrieval, a learning-based framework has been incorporated into the statistical image reconstruction to retrieve attosecond pulses from noisy streaking traces. A quantitative study of the signal-to-noise ratio required for satisfactory pulse retrieval, enabled by our framework, provides a guideline for future attosecond streaking experiments. In addition, resolving the ambiguities in the streaking process due to the carrier-envelope phase has also been demonstrated with our statistical reconstruction framework.
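The abstract does not spell out the reconstruction formulation, but the core idea of statistical reconstruction under a photon-counting (Poisson) noise model can be sketched as follows. This is a minimal illustration, not the dissertation's method: the forward model, solver, and all variable names (`reconstruct`, `I0`, `A`) are assumptions, and a tiny random system stands in for a real projection geometry.

```python
import numpy as np

# Forward model assumption: detector counts y_i ~ Poisson(I0 * exp(-(A x)_i)),
# where x is the attenuation image, A a system (projection) matrix, and I0 the
# incident photon count. Reconstruction minimizes the negative Poisson
# log-likelihood  L(x) = sum_i [ I0*exp(-(Ax)_i) + y_i*(Ax)_i ]  (up to constants).

def reconstruct(y, A, I0, n_iter=500, lr=1e-3):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        expected = I0 * np.exp(-A @ x)      # predicted mean counts
        grad = A.T @ (y - expected)         # gradient of L(x)
        x -= lr * grad                      # plain gradient-descent step
        x = np.maximum(x, 0.0)              # attenuation is non-negative
    return x

rng = np.random.default_rng(0)
n_pix, n_meas, I0 = 8, 32, 16.0             # ~16 photons per measurement
A = rng.uniform(0.0, 0.3, size=(n_meas, n_pix))
x_true = rng.uniform(0.2, 1.0, size=n_pix)
y = rng.poisson(I0 * np.exp(-A @ x_true)).astype(float)
x_hat = reconstruct(y, A, I0)
```

The key point the sketch illustrates is that the noise model enters the objective directly, so the estimator stays well-behaved even when individual measurements contain only a handful of photons.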

    Feature-preserving Reduction and Visualization of Industrial CT data using GLCM texture analysis and Mass-spring Model Deformation

    Doctoral dissertation, Seoul National University, Department of Electrical and Computer Engineering, August 2014 (advisor: Yeong Gil Shin). Non-destructive testing examines the internal structures of industrial components, such as machine parts, without dissecting them. Recently, 3D CT-based analysis has enabled more accurate inspection than traditional X-ray-based tests. However, manipulating volumetric data acquired by CT remains challenging because of the huge size of the data. This dissertation proposes a novel method that reduces the size of 3D volume data while preserving its important features. The method quantifies the importance of features in the 3D data using gray-level co-occurrence matrix (GLCM) texture analysis and represents the volume with a simple mass-spring model. According to the measured importance values, blocks containing important features expand while other blocks shrink. After deformation, small features are exaggerated in the deformed volume space and are more likely to survive uniform volume reduction. Experiments on real industrial CT data showed that the method preserves small features of the original volume, such as small pores and shrinkage-crack defect regions that disappear under plain uniform downsampling, without the artifacts of previous methods. Although an additional inverse-deformation step is required to render the deformed volume in its original shape, rendering the deformed (reduced) volume was much faster than rendering the original volume.
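The GLCM homogeneity measure that drives the importance model can be sketched in a few lines. This is a simplified illustration, assuming a single horizontal co-occurrence offset and a small number of gray levels; the dissertation's actual offsets, quantization, and block handling are not specified here.

```python
import numpy as np

def glcm(patch, levels):
    """Gray-level co-occurrence matrix for the horizontal (dx=1) offset."""
    P = np.zeros((levels, levels))
    for row in patch:
        for a, b in zip(row[:-1], row[1:]):
            P[a, b] += 1                       # count neighboring gray-level pairs
    total = P.sum()
    return P / total if total else P

def homogeneity(patch, levels=4):
    """GLCM homogeneity: sum_ij P(i,j) / (1 + |i - j|).
    Close to 1 for flat regions; lower where gray levels vary (edges, defects)."""
    P = glcm(patch, levels)
    i, j = np.indices(P.shape)
    return float(np.sum(P / (1.0 + np.abs(i - j))))

uniform = np.zeros((4, 4), dtype=int)          # flat block
textured = np.array([[0, 3, 0, 3]] * 4)        # strongly varying block
```

In a scheme like the one the abstract describes, blocks with low homogeneity (varying texture, likely defects) would be assigned high importance and expanded, while flat, high-homogeneity blocks shrink.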

    Interactive Feature Selection and Visualization for Large Observational Data

    Data creates enormous value in both scientific and industrial fields, especially as a source of new knowledge and inspiration for innovation. With massive increases in computing power, data storage capacity, and the capability to generate and collect data, scientific research communities face a transformation in exploiting large-scale, complex, high-resolution data sets for situation awareness and decision-making. Comprehensive analysis of big-data problems requires effective selection of static and time-varying feature patterns that match the interests of domain users. To fully exploit the ever-growing size of data and computing power in real applications, we propose a general feature-analysis pipeline and an integrated system that is general, scalable, and reliable for interactive feature selection and visualization of large observational data for situation awareness. The central challenge tackled in this dissertation is how to effectively identify and select meaningful features in a complex feature space. Our research efforts address three aspects: (1) enabling domain users to better define their analysis interests; (2) accelerating the process of feature selection; and (3) comprehensively presenting intermediate and final analysis results in visual form. For static feature selection, we developed a series of quantitative metrics that relate user interest to the spatio-temporal characteristics of features. For time-varying feature selection, we proposed the concept of a generalized feature set and used a generalized time-varying feature to describe the selection interest. Additionally, we provide a scalable system framework that manages both data processing and interactive visualization and effectively exploits computation and analysis resources. The methods and the system design together realize interactive feature selection on two representative large observational data sets, with large spatial and temporal resolutions respectively. The results support big-data analysis efforts that combine statistical methods with high-performance computing techniques to visualize real events interactively.
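The abstract's quantitative metrics are not given, but the general shape of static feature selection (extract candidate regions, then filter them by spatial characteristics of interest to the user) can be sketched. Everything here is illustrative: the threshold, 4-connectivity, and the minimum-size criterion are stand-ins for whatever metrics the dissertation actually defines.

```python
import numpy as np
from collections import deque

def select_features(field, threshold, min_size):
    """Label connected regions above `threshold` (4-connectivity) and keep
    only those with at least `min_size` cells (a stand-in user-interest metric)."""
    mask = field > threshold
    labels = np.zeros(field.shape, dtype=int)
    features, next_label = [], 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                                 # already part of a region
        next_label += 1
        cells, queue = [], deque([start])
        labels[start] = next_label
        while queue:                                 # breadth-first flood fill
            r, c = queue.popleft()
            cells.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < field.shape[0] and 0 <= nc < field.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
        if len(cells) >= min_size:                   # filter by spatial extent
            features.append(cells)
    return features

field = np.zeros((6, 6))
field[1:3, 1:4] = 1.0                                # large feature (6 cells)
field[5, 5] = 1.0                                    # single-cell noise
regions = select_features(field, threshold=0.5, min_size=2)
```

A time-varying version would additionally track labeled regions across time steps and filter on temporal characteristics (e.g., lifetime), which is where a generalized time-varying feature description becomes necessary.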

    An Interactive Visual Analytics Framework for Gaze Data in 3D Medical Image Reading

    Doctoral dissertation, Seoul National University, Department of Electrical and Computer Engineering, February 2016 (advisor: Jinwook Seo). We propose an interactive visual analytics framework for diagnostic gaze data on volumetric medical images. The framework is designed to compare gaze data from multiple readers with effective visualizations tailored for volumetric gaze data with additional contextual information. Gaze-pattern comparison is essential to understand how radiologists examine medical images and to identify factors influencing the examination. However, prior work on diagnostic gaze data over images from volumetric imaging systems (e.g., computed tomography or magnetic resonance imaging) showed a number of limitations in comparative analysis. During diagnosis, radiologists scroll through a stack of images to build a 3D understanding of organs and lesions, so the resulting gaze patterns contain depth information that gaze-tracking studies with 2D stimuli lack. This additional spatial dimension increases the complexity of visually representing the gaze data. A recent work proposed a visualization design based on direct volume rendering (DVR) for gaze patterns in volumetric images; however, effective and comprehensive gaze-pattern comparison remains challenging owing to the lack of interactive visualization tools for comparative gaze analysis. In this work, we first present an effective visual representation and then propose an interactive analytics framework for multiple volumetric gaze data sets. We also take on the challenge of integrating crucial contextual information, such as pupil size and windowing (i.e., adjusting the brightness and contrast of an image), into the analysis process for more in-depth and ecologically valid findings. Among the interactive visualization components, a context-embedded interactive scatterplot (CIS) is specifically designed to help users examine abstract gaze data in diverse contexts by embedding medical-imaging representations well known to radiologists. We also present results from case studies with chest and abdominal radiologists.
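A volumetric gaze representation of the kind the framework visualizes can be sketched as a duration-weighted 3D density of fixation points with Gaussian smoothing. This is a minimal sketch under stated assumptions: the voxel grid size, kernel width, input format, and the name `gaze_field` are all illustrative, not the dissertation's implementation.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()                               # normalized 1D kernel

def gaze_field(fixations, shape, sigma=1.0):
    """Accumulate duration-weighted fixations into a voxel grid, then apply
    a separable Gaussian blur along each axis."""
    field = np.zeros(shape)
    for (x, y, z), duration in fixations:
        field[x, y, z] += duration                   # splat fixation duration
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    for axis in range(3):                            # separable 3D blur
        field = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), axis, field)
    return field

# Two fixations from one reader: (voxel coordinates, duration in seconds)
fixations = [((4, 4, 4), 0.6), ((4, 5, 4), 0.2)]
field = gaze_field(fixations, shape=(9, 9, 9))
```

Such a field can be fed directly into a DVR pipeline, and comparing readers then reduces to comparing (or aggregating) their fields voxel by voxel.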

    Medical Volume Visualization Beyond Single Voxel Values
