Ray-Traced Collision Detection : Interpenetration Control and Multi-GPU Performance
We previously proposed [LGA13] an iterative ray-traced collision detection algorithm (IRTCD) that exploits spatial and temporal coherency and proved to be computationally efficient, but at the price of geometrical approximations that allow more interpenetration than necessary. In this paper, we present two methods to efficiently control and reduce this interpenetration without noticeable computational overhead. The first method predicts the next potentially colliding vertices; these predictions make our IRTCD algorithm more robust to the above-mentioned approximations, reducing the errors by up to 91%. We also present a ray re-projection algorithm that improves the physical response of ray-traced collision detection and reduces the interpenetration between objects in a virtual environment by up to 52%. Our last contribution shows that our algorithm, when implemented on multi-GPU architectures, is far faster
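Stripped to its essentials, the per-vertex ray test underlying such algorithms can be sketched as follows (a minimal illustration in plain Python against an analytic sphere, with made-up function names; this is not the [LGA13] or multi-GPU implementation): a vertex found inside the other object casts a ray, and the distance to the exit point measures the interpenetration that the paper's two methods aim to reduce.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Smallest t >= 0 with |origin + t*direction - center| = radius, else None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                     # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    if t < 0.0:                         # first root lies behind the origin
        t = (-b + math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

def penetration_depth(vertex, direction, center, radius):
    """Distance from an interpenetrating vertex to the other object's surface,
    measured by casting a ray; 0.0 if the vertex is not inside the sphere."""
    inside = sum((vertex[i] - center[i]) ** 2 for i in range(3)) < radius * radius
    if not inside:
        return 0.0
    t = ray_sphere_hit(vertex, direction, center, radius)
    return t if t is not None else 0.0
```

In a full system this test runs against triangle meshes on the GPU; the sphere merely keeps the geometry analytic.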
A study on the interrelationship between body position, spatial cognition, locomotion method, presence, and cybersickness in virtual reality
Thesis (Ph.D.) -- Seoul National University Graduate School: Interdisciplinary Program in Cognitive Science, College of Humanities, February 2021. Advisor: 이경민.
Immersive virtual environments (VEs) can disrupt the everyday connection between where our senses tell us we are and where we are actually located. In computer-mediated communication, the user often comes to feel that their body has become irrelevant and that it is only the presence of their mind that matters. However, virtual worlds offer users an opportunity to become aware of and explore both the role of the physical body in communication, and the implications of disembodied interactions.
Previous research has suggested that cognitive functions such as execution, attention, memory, and perception differ when body position changes. However, the influence of body position on these cognitive functions is still not fully understood. In particular, little is known about how physical self-positioning may affect the cognitive process of perceptual responses in a VE.
Some researchers have identified presence as a guide to what constitutes an effective virtual reality (VR) system and as the defining feature of VR. Presence is a state of consciousness related to the sense of being within a VE; in particular, it is a "psychological state in which the virtuality of the experience is unnoticed". Higher levels of presence are considered an indicator of a more successful media experience; thus, the psychological experience of "being there" is an important construct to consider when investigating the association between mediated experiences and cognition.
VR is known to induce cybersickness, which limits its application and highlights the need for scientific strategies to optimize virtual experiences. Cybersickness refers to the sickness associated with the use of VR systems and has a range of symptoms including nausea, disorientation, headaches, sweating, and eye strain. This is a complicated problem because the experience of cybersickness varies greatly with the individual, the technology being used, the design of the environment, and the task being performed. Thus, avoiding cybersickness represents a major challenge for VR development.
Spatial cognition is an invariable precursor to action because it allows the formation of the necessary mental representations that code the positions of and relationships among objects. Thus, a number of bodily actions are represented mentally within a depicted VR space, including those functionally related to navigation, the manipulation of objects, and/or interaction with other agents. Of these actions, navigation is one of the most important and frequently used interaction tasks in VR environments. Therefore, identifying an efficient locomotion technique that neither diminishes presence nor causes motion sickness has become the focus of numerous studies.
Though the details of the results have varied, past research has revealed that viewpoint can affect the sense of presence and the sense of embodiment. VR experience differs depending on the viewpoint of a user because this vantage point affects the actions of the user and their engagement with objects. Therefore, it is necessary to investigate the association between body position, spatial cognition, locomotion method, presence, and cybersickness based on viewpoint, which may clarify the understanding of cognitive processes in VE navigation.
To date, numerous detailed studies have been conducted to explore the mechanisms underlying presence and cybersickness in VR. However, few have investigated the cognitive effects of body position on presence and cybersickness. With this in mind, two separate experiments were conducted in the present study on viewpoint within VR (i.e., third-person and first-person perspectives) to further the understanding of the effects of body position in relation to spatial cognition, locomotion method, presence, and cybersickness in VEs.
In Chapter 3 (Experiment 1: third-person perspective), three body positions (standing, sitting, and half-sitting) were compared in two types of VR game with a different degree of freedom in navigation (DFN; finite and infinite) to explore the association between body position and the sense of presence in VEs. The results of the analysis revealed that standing has the most significant effect on presence for the three body positions that were investigated. In addition, the outcomes of this study indicated that the cognitive effect of body position on presence is associated with the DFN in a VE. Specifically, cognitive activity related to attention orchestrates the cognitive processes associated with body position, presence, and spatial cognition, consequently leading to an integrated sense of presence in VR. It can thus be speculated that the cognitive effects of body position on presence are correlated with the DFN in a VE.
In Chapter 4 (Experiment 2: first-person perspective), two body positions (standing and sitting) and four types of locomotion method (steering + embodied control [EC], steering + instrumental control [IC], teleportation + EC, and teleportation + IC) were compared to examine the relationship between body position, locomotion method, presence, and cybersickness when navigating a VE. The results of Experiment 2 suggested that the DFN for translation and rotation is related to successful navigation and affects the sense of presence when navigating a VE. In addition, steering locomotion (continuous motion) increases self-motion when navigating a VE, which results in stronger cybersickness than teleportation (non-continuous motion). Overall, it can be postulated that presence and cybersickness are associated with the method of locomotion when navigating a VE.
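The operational difference between the two locomotion families compared in Experiment 2, continuous steering versus discrete teleportation, can be sketched as follows (illustrative Python, not the experimental software):

```python
def steering_step(pos, forward, speed, dt):
    """Steering: continuous translation along the view direction every frame.
    The resulting sustained optical flow drives vection (illusory self-motion),
    the usual explanation for steering's stronger cybersickness."""
    return tuple(p + f * speed * dt for p, f in zip(pos, forward))

def teleport(pos, target):
    """Teleportation: an instantaneous jump with no intermediate optical flow,
    hence typically weaker vection and milder cybersickness."""
    return target
```

Embodied versus instrumental control then determines whether `forward` (or the teleport `target`) is set by the body's orientation or by a hand-held device.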
In this dissertation, the overall results of Experiment 1 suggest that the cognitive influence of presence is body-dependent in the sense that mental and brain processes rely on or are affected by the physical body. On the other hand, the outcomes of Experiment 2 illustrate the significant effects of locomotion method on the sense of presence and cybersickness during VE navigation. Taken together, the results of this study provide new insights into the cognitive effects of body position on spatial cognition (i.e., navigation) in VR and highlight the important implications of locomotion method on presence and cybersickness in VE navigation.
Chapter 1. Introduction
1.1. An Introductory Overview of the Conducted Research
1.1.1. Presence and Body Position
1.1.2. Navigation, Cybersickness, and Locomotion Method
1.2. Research Objectives
1.3. Research Experimental Approach
Chapter 2. Theoretical Background
2.1. Presence
2.1.1. Presence and Virtual Reality
2.1.2. Presence and Spatiality
2.1.3. Presence and Action
2.1.4. Presence and Attention
2.2. Body Position
2.2.1. Body Position and Cognitive Effects
2.2.2. Body Position and Postural Control
2.2.3. Body Position and Postural Stability
2.3. Spatial Cognition: Degree of Freedom in Navigation
2.3.1. Degree of Freedom in Navigation and Decision-Making
2.4. Cybersickness
2.4.1. Cybersickness and Virtual Reality
2.4.2. Sensory Conflict Theory
2.4.3. Postural Instability Theory
2.5. Self-Motion
2.5.1. Vection and Virtual Reality
2.5.2. Self-Motion and Navigation in a VE
2.6. Navigation in Virtual Environments
2.6.1. Translation and Rotation in Navigation
2.6.2. Spatial Orientation and Embodiment
2.6.3. Locomotion Methods
2.6.4. Steering and Teleportation
Chapter 3. Experiment 1: Third-Person Perspective
3.1. Quantification of the Degree of Freedom in Navigation
3.2. Experiment
3.2.1. Experimental Design and Participants
3.2.2. Stimulus Materials
3.2.2.1. First- and Third-person Perspectives in Gameplay
3.2.3. Experimental Setup and Process
3.2.4. Measurements
3.3. Results
3.3.1. Presence: two-way ANOVA
3.3.2. Presence: one-way ANOVA
3.3.2.1. Finite Navigation Freedom
3.3.2.2. Infinite Navigation Freedom
3.3.3. Summary of the Results
3.4. Discussion
3.4.1. Presence and Body Position
3.4.2. Degree of Freedom in Navigation and Decision-Making
3.4.3. Gender Difference and Gameplay
3.5. Limitations
Chapter 4. Experiment 2: First-Person Perspective
4.1. Experiment
4.1.1. Experimental Design and Participants
4.1.2. Stimulus Materials
4.1.3. Experimental Setup and Process
4.1.4. Measurements
4.2. Results
4.2.1. Presence: two-way ANOVA
4.2.2. Cybersickness: two-way ANOVA
4.2.3. Presence: one-way ANOVA
4.2.3.1. Standing Position
4.2.3.2. Sitting Position
4.2.4. Cybersickness: one-way ANOVA
4.2.4.1. Standing Position
4.2.4.2. Sitting Position
4.2.5. Summary of the Results
4.3. Discussion
4.3.1. Presence
4.3.1.1. Presence and Locomotion Method
4.3.1.2. Presence and Body Position
4.3.2. Cybersickness
4.3.2.1. Cybersickness and Locomotion Method
4.3.2.2. Cybersickness and Body Position
4.4. Limitations
Chapter 5. Conclusion
5.1. Summary of Findings
5.2. Future Research Direction
References
Appendix A
Appendix B
Abstract in Korean
The delta radiance field
The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly though, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore give the impression of being glued onto a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme end to another, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well.
Generally understood to be real-time applications which reconstruct the spatial relation of real world elements and virtual objects, Augmented Reality has to deal with several uncertainties. Among them, unknown illumination and real scene conditions are the most important. Any kind of reconstruction of real world properties in an ad-hoc manner must likewise be incorporated into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces in an ad-hoc fashion. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness. Any computation affecting the final image must be computed in real-time. This condition rules out many of the methods used for movie production.
The remaining real-time options face three problems: The shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination due to the introduction of a new object into a scene, and the believable global interaction of real and virtual light. This dissertation presents contributions to answer the problems at hand.
Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result not only presents a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects that had not previously been demonstrated in contemporary publications.
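Differential Rendering, the baseline this dissertation replaces, composites each pixel from two synthetic renderings of the reconstructed scene, one with and one without the virtual object, adding only the resulting radiance change to the camera image. A minimal sketch (illustrative Python on flat per-pixel intensity lists; not the thesis's own method):

```python
def differential_composite(camera, with_obj, without_obj, obj_mask):
    """Classic differential rendering on per-pixel intensities in [0, 1].

    camera      -- the real photograph
    with_obj    -- synthetic rendering: reconstructed scene + virtual object
    without_obj -- synthetic rendering: reconstructed scene alone
    obj_mask    -- True where the virtual object itself is visible
    """
    out = []
    for cam, lw, lo, m in zip(camera, with_obj, without_obj, obj_mask):
        if m:
            # Virtual object covers this pixel: take the synthetic radiance.
            out.append(lw)
        else:
            # Real surface: add the radiance change the object causes.
            # lw < lo darkens (shadows); lw > lo brightens (reflected light).
            out.append(min(1.0, max(0.0, cam + (lw - lo))))
    return out
```

The cost the dissertation criticizes is visible here: every frame needs two full global illumination solutions of the scene.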
Multi-Mobile Computing
With mobile systems ever more ubiquitous, individual users often own multiple mobile systems and groups of users often have many mobile systems at their disposal. As a result, there is a growing demand for multi-mobile computing, the ability to combine the functionality of multiple mobile systems into a more capable one. However, there are several key challenges. First, mobile systems are highly heterogeneous with different software and hardware, each with their own interfaces and data formats. Second, there are no effective ways to allow users to easily and dynamically compose together multiple mobile systems for the quick interactions that typically take place with mobile systems. Finally, there is a lack of system infrastructure to allow existing apps to make use of multiple mobile systems, or to enable developers to write new multi-mobile aware apps. My thesis is that higher-level abstractions of mobile operating systems can be reused to combine heterogeneous mobile systems into a more capable one and enable existing and new apps to provide new functionality across multiple mobile systems.
First, we present M2, a system for multi-mobile computing that enables existing unmodified mobile apps to share and combine multiple devices, including cameras, displays, speakers, microphones, sensors, GPS, and input. To support heterogeneous devices, M2 introduces a new data-centric approach that leverages higher-level device abstractions and hardware acceleration to efficiently share device data, not API calls. M2 introduces device transformation, a new technique to mix and match heterogeneous devices, enabling, for example, existing apps to leverage a single larger display fused from multiple displays for better viewing, or to obtain a Nintendo Wii-like gaming experience by translating accelerometer input into touchscreen input. We have implemented M2 and show that it operates across heterogeneous systems, including multiple versions of Android and iOS, and can run existing apps across mobile systems with modest overhead and qualitative performance indistinguishable from using local device hardware.
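The accelerometer-to-touchscreen transformation mentioned above can be illustrated with a deliberately simplified sketch (a hypothetical Python helper, not M2's actual API): tilt along each device axis is normalized and mapped onto a pixel coordinate of the target touchscreen.

```python
def accel_to_touch(ax, ay, screen_w, screen_h, tilt_range=9.81):
    """Map accelerometer readings (m/s^2 along the device's x/y axes) to a
    touchscreen pixel: full tilt one way reaches one screen edge, full tilt
    the other way reaches the opposite edge."""
    def clamp(v):
        return max(-tilt_range, min(tilt_range, v))
    nx = (clamp(ax) + tilt_range) / (2.0 * tilt_range)   # normalized 0.0 .. 1.0
    ny = (clamp(ay) + tilt_range) / (2.0 * tilt_range)
    return round(nx * (screen_w - 1)), round(ny * (screen_h - 1))
```

A real device transformation would additionally filter sensor noise and run inside the platform's input pipeline, but the core of the idea is exactly this kind of data-level mapping between device abstractions.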
Second, we present Tap, a framework that leverages M2's data-centric architecture to make it easy for users to dynamically compose collections of mobile systems and for developers to write new multi-mobile apps that make use of those impromptu collections. Tap allows users to simply tap systems together to compose them into a collection without the need to register or connect to any cloud infrastructure. Tap makes it possible for apps to use existing mobile platform APIs across multiple mobile systems by virtualizing data sources so that local and remote data sources can be combined upon tapping. Virtualized data sources can be hardware or software features, including media, clipboard, calendar events, and devices such as cameras and microphones. Leveraging existing mobile platform APIs makes it easy for developers to write apps that use hardware and software features across dynamically composed collections of mobile systems. We have implemented Tap and show that it provides good usability for dynamically composing multiple mobile systems and good performance for sharing hardware devices and software features across multiple mobile systems.
Finally, using M2 and Tap, we present various apps that show how existing apps can provide useful functionality across multiple mobile systems and how new apps can easily be developed to provide new multi-mobile functionality. Examples include panoramic video recording using cameras from multiple mobile systems, a surround-sound music player app that configures itself based on automatically detecting the location of multiple mobile systems, and an added feature to the Snapchat app that allows multiple users to share a live Snap using their own cameras and filters. Our user studies with these apps show that multi-mobile computing offers a richer and more enhanced experience for users and a much simpler development effort for developers.
Human factors in instructional augmented reality for intravehicular spaceflight activities and How gravity influences the setup of interfaces operated by direct object selection
In human spaceflight, advanced user interfaces are becoming an interesting means of facilitating human-machine interaction, enhancing and safeguarding the sequences of intravehicular space operations. Efforts to ease such operations have shown strong interest in novel human-computer interaction techniques such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, which also includes consideration of the effect of altered gravity on handling such interfaces.
Virtual Reality
At present, virtual reality affects information organization and management and is even changing the design principles of information systems so that they adapt to application requirements. This book aims to provide a broad perspective on the development and application of virtual reality. The first part, "virtual reality visualization and vision", covers new developments in virtual reality visualization of 3D scenarios, virtual reality and vision, and high-fidelity immersive virtual reality including tracking, rendering, and display subsystems. The second part, "virtual reality in robot technology", presents applications of virtual reality in a remote rehabilitation robot-based evaluation method and in adaptive walking of multi-legged robots on unstructured terrain. The third part, "industrial and construction applications", addresses product design, the space industry, building information modeling, and construction and maintenance using virtual reality. The last part, "culture and life of human", describes applications in cultural life and multimedia technology.
Research Activities Report of the Research Institute of Electrical Communication, Tohoku University, No. 29 (2022)
Bulletin (departmental bulletin paper)
Development and implementation of an advanced method for improving safety and optimizing the working environment during manual operations
ABSTRACT
The modern industrial environment can be described by its advanced technology and machinery, and many believe that human operational tasks have been almost eliminated. However, people are still a key component of these processes, as automation has not eliminated manual operations. Safety in the workplace cannot be guaranteed despite technological advances; accordingly, Occupational Health and Safety remains a crucial sector for improvement in every working environment. Ergonomics deals with the optimization of work design and aims to minimize occupational issues. Most techniques for identifying occupational issues and optimizing the working environment are based on task observation and the recording of incidents, while methods for avoiding human error are based on training and appropriate safety education.
Task observation is a subjective method, and the recording of incidents can be an important kind of metric for understanding workplace defects. Nevertheless, these methods do not capture the facts in real time, which can be an important issue for workplace safety. On the other hand, safety training methods, although necessary most of the time, do not fully prevent accidents or occupational health problems, and thus remain an open area for improvement.
This thesis investigates the possibility of electrophysiological recording to capture operators' cognition in parallel with the application of standardized ergonomic methods. The study focuses on the usefulness of electrodermal activity during three case studies: two represent common working tasks of low risk, and one is based on safety training methods.
Electrodermal activity has been shown to relate discomfort and cognitive status to specific biosignals; it can therefore be a valid tool for deeply understanding the status of operators while they perform cognitive or physical tasks. The core of the usefulness of electrodermal activity is that it is a mechanism of the sympathetic nervous system and can be used as an index of autonomic reaction to emotions.
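As a rough illustration of how electrodermal activity is commonly quantified, a simplified trough-to-peak scoring of skin-conductance responses might look as follows (a sketch with an assumed 0.05 µS amplitude criterion; real EDA pipelines also filter, detrend, and window the signal, and this is not the method used in the thesis):

```python
def count_scrs(eda, threshold=0.05):
    """Count skin-conductance responses in an EDA trace (microsiemens) as
    rises of at least `threshold` from the running trough to the current
    sample; the trough resets after each counted response."""
    count = 0
    trough = eda[0]
    for v in eda[1:]:
        if v < trough:
            trough = v                  # track the local minimum
        elif v - trough >= threshold:
            count += 1                  # one trough-to-peak response scored
            trough = v                  # avoid recounting the same rise
    return count
```

The SCR count per minute is one of the standard indices of sympathetic arousal that such studies relate to workload and discomfort.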
Distance-dependent interaction in large high-resolution display environments
The goal of this work is to develop methods that support users working in large high-resolution display environments by adapting the visualization and the interactivity to the current viewing distance. The proposed Interaction Scaling (IS) uses physical navigation for this adaptation by combining the computation of a distance-dependent mapping with automatic/manual switching between precision levels. Studies show that IS improves user performance in 2D manipulation when the required precision increases.
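A distance-dependent mapping of the kind Interaction Scaling combines with precision-level switching can be sketched as a control-display gain that grows with viewing distance (illustrative Python with made-up parameter values; not the implementation evaluated in the studies):

```python
def cd_gain(distance, near=0.5, far=3.0, g_near=0.2, g_far=1.5):
    """Control-display gain as a function of viewing distance (meters).
    Close to the display the gain is low, so hand motion maps to precise
    cursor motion; far away the gain is high, so the whole display can be
    crossed with small movements."""
    if distance <= near:
        return g_near
    if distance >= far:
        return g_far
    t = (distance - near) / (far - near)    # linear blend in between
    return g_near + t * (g_far - g_near)
```

Manual precision-level switching would then correspond to overriding this automatic gain with a user-selected fixed level.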