15,517 research outputs found

    Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    The quality of modern astronomical data, the power of modern computers, and the agility of current image-processing software enable the creation of high-quality images in a purely digital form. Together, these technological advancements have created a new ability to make color astronomical images and, in many ways, have led to a new philosophy of how to create them. A practical guide is presented on how to generate astronomical images from research data with powerful image-processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. A philosophy is also presented on how to use color and composition to create images that simultaneously highlight scientific detail and are aesthetically appealing. This philosophy is necessary because most datasets do not correspond to the wavelength range of sensitivity of the human eye. The use of visual grammar, defined as the elements which affect the interpretation of an image, can maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion, and energy. In addition, composition can be used to engage viewers and keep them interested for a longer period of time. The use of these techniques can result in a striking image that effectively conveys the science within the image, both to scientists and to the public. Comment: 104 pages, 38 figures, submitted to A
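As an aside on the layering metaphor this abstract describes, a minimal sketch of the idea in Python could look as follows. The arcsinh stretch is a common choice for astronomical data, and the band data and color assignments are hypothetical; none of this is the paper's actual code.

```python
import numpy as np

def asinh_stretch(data, beta=5.0):
    """Nonlinear stretch often used for astronomical data: it compresses
    bright cores while preserving faint structure."""
    d = (data - data.min()) / (np.ptp(data) + 1e-12)  # normalize to [0, 1]
    return np.arcsinh(beta * d) / np.arcsinh(beta)

def layer_composite(datasets, colors):
    """Assign each stretched dataset a color layer and sum the layers
    into one RGB image, clipped to [0, 1]."""
    h, w = datasets[0].shape
    rgb = np.zeros((h, w, 3))
    for data, color in zip(datasets, colors):
        rgb += asinh_stretch(data)[..., None] * np.asarray(color, dtype=float)
    return np.clip(rgb, 0.0, 1.0)

# three hypothetical narrow-band exposures mapped to red, green, blue layers
rng = np.random.default_rng(0)
bands = [rng.gamma(2.0, 1.0, size=(64, 64)) for _ in range(3)]
image = layer_composite(bands, [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
print(image.shape)  # (64, 64, 3)
```

Because the layers simply add, any number of datasets can be combined with arbitrary color assignments, which is exactly the "immense parameter space" the abstract refers to.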

    HDR imaging techniques applied to a Schlieren set-up for aerodynamics studies

    The schlieren technique reveals density changes in a flow. Visualizing weak details requires high sensitivity, but this saturates the most intense regions, with a consequent loss of information. The HDR photographic technique was applied to a schlieren set-up to recover these saturated areas and to visualize both weak and intense details of the flow in a single photograph. Subsonic and supersonic flows, both steady and unsteady, were studied.
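The merging step such an HDR set-up relies on can be sketched as a weighted multi-exposure merge: pixels near saturation get low weight, so saturated regions are filled in from other exposures. The hat weighting below is a standard textbook choice, not necessarily the one used in this work, and the scene data is simulated.

```python
import numpy as np

def hdr_merge(exposures, times, eps=1e-6):
    """Weighted-average radiance estimate from differently exposed images
    (pixel values in [0, 1]): a hat weight discounts pixels near 0 or 1,
    so saturated regions are recovered from the other exposures."""
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(exposures[0], dtype=float)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: 0 at 0 and at 1
        num += w * img / t                 # per-exposure radiance estimate
        den += w
    return num / (den + eps)

# hypothetical scene radiance and three simulated, clipped exposures
rng = np.random.default_rng(1)
radiance = rng.uniform(0.05, 2.0, size=(32, 32))
times = [0.25, 1.0, 4.0]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
recovered = hdr_merge(shots, times)
```

In this noise-free toy, `recovered` matches `radiance` wherever at least one exposure is unsaturated, which is the effect the abstract describes: weak and intense flow details visible in a single image.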

    An Image Alignment Method Using a Feature Blending Network, and Its Applications to High Dynamic Range Imaging and Video Super-Resolution

    Ph.D. dissertation -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2020. Advisor: Nam Ik Cho. This dissertation presents a deep end-to-end network for high dynamic range (HDR) imaging of dynamic scenes with background and foreground motions. Generating an HDR image from a sequence of multi-exposure images is a challenging process when the images have misalignments because they were taken in a dynamic situation. Hence, recent methods first align the multi-exposure images to the reference by using patch matching, optical flow, homography transformation, or an attention module before merging. In this dissertation, a deep network that synthesizes the aligned images by blending the information from the multi-exposure images is proposed, because explicitly aligning photos with different exposures is inherently a difficult problem. Specifically, the proposed network generates under- and over-exposed images that are structurally aligned to the reference by blending all the information from the dynamic multi-exposure images. The primary idea is that blending two images in the deep-feature domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, resulting in better-aligned images than pixel-domain blending or geometric transformation methods. Specifically, the proposed alignment network consists of a two-way encoder for extracting features from the two images separately, several convolution layers for blending the deep features, and a decoder for constructing the aligned images. The proposed network is shown to generate well-aligned images across a wide range of exposure differences and thus can be effectively used for HDR imaging of dynamic scenes. Moreover, by adding a simple merging network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained.
This dissertation also presents a deep end-to-end network for video super-resolution (VSR) of frames with motions. Reconstructing a high-resolution frame from a sequence of adjacent frames is a challenging process when the frames have misalignments. Hence, recent methods first align the adjacent frames to the reference by using optical flow or by adding a spatial transformer network (STN). In this dissertation, a deep network that synthesizes the aligned frames by blending the information from adjacent frames is proposed, because explicitly aligning frames is inherently a difficult problem. Specifically, the proposed network generates adjacent frames that are structurally aligned to the reference by blending all the information from the neighboring frames. As before, the primary idea is that blending two images in the deep-feature domain is effective for synthesizing frames that are structurally aligned to the reference, resulting in better-aligned images than pixel-domain blending or geometric transformation methods. The proposed alignment network again consists of a two-way encoder for extracting features from the two images separately, several convolution layers for blending the deep features, and a decoder for constructing the aligned images. The proposed network is shown to align adjacent frames very well and thus can be effectively used for VSR. Moreover, by adding a simple reconstruction network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained. In addition to the individual HDR imaging and VSR networks, this dissertation presents a deep end-to-end network for joint HDR-SR of dynamic scenes with background and foreground motions. The proposed HDR imaging and VSR networks enhance the dynamic range and the resolution of images, respectively; however, both can be enhanced simultaneously by a single network.
In this dissertation, a network with the same structure as the proposed VSR network is employed for this joint task. The network is shown to reconstruct final results with both a higher dynamic range and a higher resolution. It is compared with several methods built from existing HDR imaging and VSR networks, and shows better results both qualitatively and quantitatively.
์ œ์•ˆํ•˜๋Š” ๋„คํŠธ์›Œํฌ๋Š” ๊ณ  ๋ช…์•”๋น„ ์˜์ƒ๋ฒ•์—์„œ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ๋„๋ก ๋…ธ์ถœ ์ •๋„๊ฐ€ ํฌ๊ฒŒ ์ฐจ์ด๋‚˜๋Š” ์˜์ƒ์—์„œ๋„ ์ž˜ ์ž‘๋™ํ•œ๋‹ค. ๊ฒŒ๋‹ค๊ฐ€, ๊ฐ„๋‹จํ•œ ๋ณ‘ํ•ฉ ๋„คํŠธ์›Œํฌ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ  ์ „์ฒด ๋„คํŠธ์›Œํฌ๋“ค์„ ํ•œ ๋ฒˆ์— ํ•™์Šตํ•จ์œผ๋กœ์„œ, ์ตœ๊ทผ์— ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•๋“ค ๋ณด๋‹ค ๋” ์ข‹์€ ์„ฑ๋Šฅ์„ ๊ฐ–๋Š”๋‹ค. ๋˜ํ•œ, ๋ณธ ํ•™์œ„๋…ผ๋ฌธ์€ ๋™์˜์ƒ ๋‚ด ํ”„๋ ˆ์ž„๋“ค์„ ์ด์šฉํ•˜๋Š” ๋น„๋””์˜ค ๊ณ  ํ•ด์ƒํ™” ๋ฐฉ๋ฒ•์„ ์œ„ํ•œ ๋”ฅ ๋Ÿฌ๋‹ ๋„คํŠธ์›Œํฌ๋ฅผ ์ œ์•ˆํ•œ๋‹ค. ๋™์˜์ƒ ๋‚ด ์ธ์ ‘ํ•œ ํ”„๋ ˆ์ž„๋“ค ์‚ฌ์ด์—๋Š” ์›€์ง์ž„์ด ์กด์žฌํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ด๋“ค์„ ์ด์šฉํ•˜์—ฌ ๊ณ  ํ•ด์ƒ๋„์˜ ํ”„๋ ˆ์ž„์„ ํ•ฉ์„ฑํ•˜๋Š” ๊ฒƒ์€ ์•„์ฃผ ์–ด๋ ค์šด ์ž‘์—…์ด๋‹ค. ๋”ฐ๋ผ์„œ, ์ตœ๊ทผ์— ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•๋“ค์€ ์ด ์ธ์ ‘ํ•œ ํ”„๋ ˆ์ž„๋“ค์„ ์ •๋ ฌํ•˜๊ธฐ ์œ„ํ•ด ์˜ตํ‹ฐ์ปฌ ํ”Œ๋กœ์šฐ๋ฅผ ๊ณ„์‚ฐํ•˜๊ฑฐ๋‚˜ STN์„ ์ถ”๊ฐ€ํ•œ๋‹ค. ์›€์ง์ž„์ด ์กด์žฌํ•˜๋Š” ํ”„๋ ˆ์ž„๋“ค์„ ์ •๋ ฌํ•˜๋Š” ๊ฒƒ์€ ์–ด๋ ค์šด ๊ณผ์ •์ด๊ธฐ ๋•Œ๋ฌธ์—, ์ด ๋…ผ๋ฌธ์—์„œ๋Š” ์ธ์ ‘ํ•œ ํ”„๋ ˆ์ž„๋“ค๋กœ๋ถ€ํ„ฐ ์–ป์€ ์ •๋ณด๋ฅผ ์„ž์–ด์„œ ์ •๋ ฌ๋œ ํ”„๋ ˆ์ž„์„ ํ•ฉ์„ฑํ•˜๋Š” ๋„คํŠธ์›Œํฌ๋ฅผ ์ œ์•ˆํ•œ๋‹ค. ํŠนํžˆ, ์ œ์•ˆํ•˜๋Š” ๋„คํŠธ์›Œํฌ๋Š” ์ด์›ƒํ•œ ํ”„๋ ˆ์ž„๋“ค์„ ๋ชฉํ‘œ ํ”„๋ ˆ์ž„์„ ๊ธฐ์ค€์œผ๋กœ ์ •๋ ฌํ•œ๋‹ค. ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ์ฃผ์š” ์•„์ด๋””์–ด๋Š” ์ •๋ ฌ๋œ ํ”„๋ ˆ์ž„์„ ํ•ฉ์„ฑํ•  ๋•Œ ํŠน์ง• ๋„๋ฉ”์ธ์—์„œ ํ•ฉ์„ฑํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ์ด๋Š” ํ”ฝ์…€ ๋„๋ฉ”์ธ์—์„œ ํ•ฉ์„ฑํ•˜๊ฑฐ๋‚˜ ๊ธฐํ•˜ํ•™์  ๋ณ€ํ™˜์„ ์ด์šฉํ•  ๋•Œ ๋ณด๋‹ค ๋” ์ข‹์€ ์ •๋ ฌ ๊ฒฐ๊ณผ๋ฅผ ๊ฐ–๋Š”๋‹ค. ํŠนํžˆ, ์ œ์•ˆํ•˜๋Š” ์ •๋ ฌ ๋„คํŠธ์›Œํฌ๋Š” ๋‘ ๊ฐˆ๋ž˜์˜ ์ธ์ฝ”๋”์™€ ์ปจ๋ณผ๋ฃจ์…˜ ๋ ˆ์ด์–ด๋“ค ๊ทธ๋ฆฌ๊ณ  ๋””์ฝ”๋”๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ๋‹ค. ์ธ์ฝ”๋”๋“ค์€ ๋‘ ์ž…๋ ฅ ํ”„๋ ˆ์ž„์œผ๋กœ๋ถ€ํ„ฐ ํŠน์ง•์„ ์ถ”์ถœํ•˜๊ณ , ์ปจ๋ณผ๋ฃจ์…˜ ๋ ˆ์ด์–ด๋“ค์ด ์ด ํŠน์ง•๋“ค์„ ์„ž๋Š”๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ ๋””์ฝ”๋”์—์„œ ์ •๋ ฌ๋œ ํ”„๋ ˆ์ž„์„ ์ƒ์„ฑํ•œ๋‹ค. ์ œ์•ˆํ•˜๋Š” ๋„คํŠธ์›Œํฌ๋Š” ์ธ์ ‘ํ•œ ํ”„๋ ˆ์ž„๋“ค์„ ์ž˜ ์ •๋ ฌํ•˜๋ฉฐ, ๋น„๋””์˜ค ๊ณ  ํ•ด์ƒํ™”์— ํšจ๊ณผ์ ์œผ๋กœ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ๋‹ค. 
Table of contents:
    1 Introduction
    2 Related Work
      2.1 High Dynamic Range Imaging
        2.1.1 Rejecting Regions with Motions
        2.1.2 Alignment Before Merging
        2.1.3 Patch-based Reconstruction
        2.1.4 Deep-learning-based Methods
        2.1.5 Single-Image HDRI
      2.2 Video Super-resolution
        2.2.1 Deep Single Image Super-resolution
        2.2.2 Deep Video Super-resolution
    3 High Dynamic Range Imaging
      3.1 Motivation
      3.2 Proposed Method
        3.2.1 Overall Pipeline
        3.2.2 Alignment Network
        3.2.3 Merging Network
        3.2.4 Integrated HDR imaging network
      3.3 Datasets
        3.3.1 Kalantari Dataset and Ground Truth Aligned Images
        3.3.2 Preprocessing
        3.3.3 Patch Generation
      3.4 Experimental Results
        3.4.1 Evaluation Metrics
        3.4.2 Ablation Studies
        3.4.3 Comparisons with State-of-the-Art Methods
        3.4.4 Application to the Case of More Numbers of Exposures
        3.4.5 Pre-processing for other HDR imaging methods
    4 Video Super-resolution
      4.1 Motivation
      4.2 Proposed Method
        4.2.1 Overall Pipeline
        4.2.2 Alignment Network
        4.2.3 Reconstruction Network
        4.2.4 Integrated VSR network
      4.3 Experimental Results
        4.3.1 Dataset
        4.3.2 Ablation Study
        4.3.3 Capability of DSBN for alignment
        4.3.4 Comparisons with State-of-the-Art Methods
    5 Joint HDR and SR
      5.1 Proposed Method
        5.1.1 Feature Blending Network
        5.1.2 Joint HDR-SR Network
        5.1.3 Existing VSR Network
        5.1.4 Existing HDR Network
      5.2 Experimental Results
    6 Conclusion
    Abstract (In Korean)
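A highly simplified sketch of the alignment idea described above: two-way encoding of the reference and the misaligned exposure, blending in the deep-feature domain, then decoding the aligned image. Random linear maps stand in for the learned convolutional encoder, blending layers, and decoder; all dimensions and names are illustrative, not the dissertation's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the dissertation's modules: the real versions are
# convolutional networks trained end-to-end; here each is a random
# linear map, so only the data flow is illustrated.
D, F = 64, 32                           # pixel dim, feature dim (assumed)
enc_ref = rng.normal(size=(F, D))       # encoder branch for the reference
enc_src = rng.normal(size=(F, D))       # encoder branch for the other exposure
blend   = rng.normal(size=(F, 2 * F))   # layers that mix the two feature sets
dec     = rng.normal(size=(D, F))       # decoder back to image space

def align(reference, source):
    """Two-way encoding, deep-feature blending, then decoding: the output
    is an image synthesized from both inputs rather than a geometric
    warp of `source`, which is the dissertation's central idea."""
    f = np.concatenate([enc_ref @ reference, enc_src @ source])
    return dec @ np.tanh(blend @ f)

ref = rng.normal(size=D)   # mid-exposure reference (flattened toy image)
src = rng.normal(size=D)   # misaligned under-exposure (flattened toy image)
out = align(ref, src)
print(out.shape)  # (64,)
```

The point of the sketch is structural: because the output is synthesized from blended features, no explicit correspondence between the two inputs is ever computed.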

    Photorealistic Texturing for Modern Video Games

    Simulating realism has become a standard for many games in the industry. While real-time rendering requires considerable rendering resources, texturing defines the physical parameters of surfaces at a lower computational cost. The objective of this thesis was to study the evolution of texture mapping and to define a workflow for approaching photorealism with modern instruments for video game production. All the textures were created with Agisoft Photoscan, Substance Designer & Painter, Adobe Photoshop, and Pixologic ZBrush. Through both theoretical and practical approaches, this thesis explores how textures are used and which applications can help build them for a better result. Each workflow is introduced with the main points of its purpose as the author's suggestion, and can be used as a guideline by many companies, including Ringtail Studios OÜ. In conclusion, the thesis summarizes the outcome of the textures and their workflows. The results were successfully established by the author, with the aim of introducing methods for material production.

    Real-Time Computational Gigapixel Multi-Camera Systems

    Standard cameras are designed to faithfully mimic the human eye and the visual system. In recent years, commercially available cameras have become more complex and offer higher image resolutions than ever before. However, the quality of conventional imaging methods is limited by several parameters, such as the pixel size, the lens system, and the diffraction limit. Rapid technological advancements, the increase in available computing power, and the introduction of Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs) open new possibilities in the computer vision and computer graphics communities. Researchers are now focusing on utilizing the immense computational power offered by modern processing platforms to create imaging systems with novel or significantly enhanced capabilities compared to standard ones. One popular type of computational imaging system offering new possibilities is the multi-camera system. This thesis focuses on FPGA-based multi-camera systems that operate in real time. The aim of the multi-camera systems presented in this thesis is to offer wide field-of-view (FOV) video coverage at high frame rates. The wide FOV is achieved by constructing a panoramic image from the images acquired by the multi-camera system. Two new real-time computational imaging systems that provide new functionalities and better performance than conventional cameras are presented in this thesis. Each camera system's design and implementation are analyzed in detail, built, and tested in real-time conditions. Panoptic is a miniaturized low-cost multi-camera system that reconstructs a 360-degree view in real time. Since it is an easily portable system, it provides the means to capture the complete surrounding light field in dynamic environments, such as when mounted on a vehicle or a flying drone.
    The second presented system, GigaEye II, is a modular high-resolution imaging system that introduces the concept of distributed image processing in real-time camera systems. This thesis explains in detail how such a concept can be efficiently used in real-time computational imaging systems. The purpose of computational imaging systems in the form of multi-camera systems does not end with real-time panoramas: the application scope of these cameras is vast. They can be used in 3D cinematography, for broadcasting live events, or for immersive telepresence. The final chapter of this thesis presents three potential applications of these systems: object detection and tracking, high dynamic range (HDR) imaging, and observation of multiple regions of interest. Object detection and tracking and the observation of multiple regions of interest are extremely useful capabilities for surveillance systems, in the security and defense industries, and in the fast-growing industry of autonomous vehicles. High dynamic range imaging, on the other hand, is becoming a common option in consumer-market cameras, and the presented method allows instantaneous capture of HDR video. Finally, the thesis concludes with a discussion of real-time multi-camera systems, their advantages, their limitations, and future predictions.
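The panorama construction these systems perform can be illustrated, in a much-simplified form, by feather-blending two horizontally overlapping views. The real systems do this per pixel in FPGA hardware with calibrated camera geometry; the fixed column overlap below is purely an assumption for the toy example.

```python
import numpy as np

def feather_stitch(left, right, overlap):
    """Stitch two horizontally adjacent views whose last/first `overlap`
    columns see the same scene, using a linear feathering ramp across
    the overlap (a simplified stand-in for real-time panorama
    construction from a multi-camera rig)."""
    alpha = np.linspace(1.0, 0.0, overlap)   # weight ramp for `left`
    blended = alpha * left[:, -overlap:] + (1.0 - alpha) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

rng = np.random.default_rng(2)
a = rng.uniform(size=(16, 24))
b = rng.uniform(size=(16, 24))
b[:, :8] = a[:, -8:]                 # pretend the 8-column overlap matches
pano = feather_stitch(a, b, overlap=8)
print(pano.shape)  # (16, 40)
```

Feathering hides exposure differences along the seam; when the overlapping content matches exactly, as forced here, the blend reproduces it unchanged.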

    Procedural textures generation: adaptation into a Unity tool

    This document explains the procedure and development of the "Procedural Texture Generator" tool for Unity. It is a node-based editor that allows the user to create textures by generating different types of noise, combining them, and applying several filters to generate the different textures needed for PBR materials. The document describes the techniques and procedures used for procedural texturing and their adaptation into a Unity tool, as well as the difficulties encountered during development.
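The generate-noise / combine / filter node pipeline described above can be sketched as plain functions wired into a tiny graph. The node names and parameters below are illustrative, not the tool's actual nodes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal "nodes": each takes and returns a float array in [0, 1],
# echoing the generator / combiner / filter stages of a node-based
# procedural texture editor.
def value_noise(shape, cell=8):
    """Generator node: blocky value noise, one random value per cell."""
    coarse = rng.uniform(size=(shape[0] // cell, shape[1] // cell))
    return np.kron(coarse, np.ones((cell, cell)))

def combine(a, b, mode="multiply"):
    """Combiner node: blend two noise layers."""
    return a * b if mode == "multiply" else np.clip(a + b, 0.0, 1.0)

def levels(x, lo=0.2, hi=0.8):
    """Filter node: remap the value range and clamp, boosting contrast."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# A tiny node graph: two noises -> combine -> levels -> e.g. a roughness map
tex = levels(combine(value_noise((64, 64), 8), value_noise((64, 64), 16)))
print(tex.shape)  # (64, 64)
```

A real editor adds smooth interpolation, many more node types, and per-channel outputs, but the dataflow (generators feeding combiners feeding filters) is the same.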

    Weighted Least Squares Based Detail Enhanced Exposure Fusion

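This entry has no abstract, but methods with this title typically build on an edge-preserving weighted-least-squares (WLS) smoother to split each exposure into base and detail layers, boosting the detail before or during fusion. The 1D sketch below is an assumption about that general family, not this paper's actual algorithm.

```python
import numpy as np

def wls_smooth_1d(x, lam=5.0, eps=1e-4):
    """Edge-preserving 1D weighted-least-squares smoothing: minimize
    |u - x|^2 + lam * sum_i w_i (u[i+1] - u[i])^2 with gradient-dependent
    weights w_i, by solving the linear system (I + lam * L) u = x."""
    n = len(x)
    g = np.abs(np.diff(x))
    w = 1.0 / (g + eps)                  # small weight across strong edges
    L = np.zeros((n, n))
    for i in range(n - 1):               # graph Laplacian of the chain
        L[i, i] += w[i]; L[i + 1, i + 1] += w[i]
        L[i, i + 1] -= w[i]; L[i + 1, i] -= w[i]
    return np.linalg.solve(np.eye(n) + lam * L, x)

# base/detail decomposition with a detail boost, as detail-enhanced
# exposure fusion pipelines typically do on each input exposure
signal = np.concatenate([np.zeros(32), np.ones(32)]) + \
         0.05 * np.sin(np.arange(64))
base = wls_smooth_1d(signal)            # smooth ripple, keep the step edge
enhanced = base + 2.0 * (signal - base)  # amplify the detail layer
```

The gradient-dependent weights are what make the smoother edge-preserving: the fine ripple is averaged away while the large step survives into the base layer.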

    Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

    One of the main objectives in digital imaging is to mimic the capabilities of the human eye and, perhaps, to go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no imaging technology to date has been able to accurately reproduce its capabilities. The extraordinary capabilities of the human eye have become a crucial benchmark in digital imaging, since digital photography, video recording, and computer vision applications continue to demand more realistic and accurate imaging reproduction and analytic capabilities. Over the decades, researchers have tried to solve the colour constancy problem, as well as to extend the dynamic range of digital imaging devices, by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the complexity of the human visual system in achieving effective colour constancy and dynamic range capabilities. The aim of the research presented in this thesis is to enhance overall image quality within the image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms surpass state-of-the-art methods in the fields of dynamic range and colour constancy.
Moreover, if this unique set of image-processing algorithms is used within an image signal processor, it enables digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities: the ultimate goal of any state-of-the-art technique or commercial imaging device.
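As a concrete, much simpler example of the colour constancy problem this thesis addresses, the classic gray-world baseline rescales each channel so the average scene colour is neutral. This is a textbook method, not the thesis's algorithm, and the scene and illuminant below are simulated.

```python
import numpy as np

def gray_world(image):
    """Gray-world colour constancy: assume the average scene colour is
    achromatic and rescale each channel so all per-channel means equal
    the global mean (a classic baseline, not this thesis's method)."""
    means = image.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means          # per-channel correction gains
    return np.clip(image * gain, 0.0, 1.0)

# hypothetical scene under a reddish illuminant
rng = np.random.default_rng(4)
scene = rng.uniform(0.1, 0.6, size=(32, 32, 3))
cast = scene * np.array([1.4, 1.0, 0.7])   # illuminant colour cast
corrected = gray_world(cast)
means = corrected.reshape(-1, 3).mean(axis=0)
print(np.allclose(means, means.mean()))  # channel means are equalized
```

The thesis's contribution lies precisely in going beyond such global-statistics baselines, which fail when the gray-world assumption does not hold.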