InFusionSurf: Refining Neural RGB-D Surface Reconstruction Using Per-Frame Intrinsic Refinement and TSDF Fusion Prior Learning
We introduce InFusionSurf, a novel approach to enhance the fidelity of neural
radiance field (NeRF) frameworks for 3D surface reconstruction using RGB-D
video frames. Building upon previous methods that have employed feature
encoding to improve optimization speed, we further improve the reconstruction
quality with minimal impact on optimization time by refining depth information.
Our per-frame intrinsic refinement scheme addresses frame-specific blurs caused
by camera motion in each depth frame. Furthermore, InFusionSurf utilizes a
classical real-time 3D surface reconstruction method, the truncated signed
distance field (TSDF) Fusion, as prior knowledge to pretrain the feature grid
to support reconstruction details while accelerating the training. The
quantitative and qualitative experiments comparing the performances of
InFusionSurf against prior work indicate that our method is capable of
accurately reconstructing a scene without sacrificing optimization speed. We
also demonstrate the effectiveness of our per-frame intrinsic refinement and
TSDF Fusion prior learning techniques via an ablation study.
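The classical TSDF Fusion that InFusionSurf uses as prior knowledge can be illustrated with a minimal sketch. This is a hypothetical 1-D example, not the authors' implementation: each depth observation is converted into a truncated signed distance per voxel and fused by a running weighted average.

```python
import numpy as np

# Voxel centers along one camera ray (metres from the camera); hypothetical setup.
voxels = np.linspace(0.0, 2.0, 9)
trunc = 0.25                     # truncation distance
tsdf = np.zeros_like(voxels)     # fused signed distances
weight = np.zeros_like(voxels)   # per-voxel observation counts

# Fuse three noisy depth observations of a surface at ~1 m.
for surface_depth in [1.0, 1.02, 0.98]:
    # Signed distance from each voxel to the observed surface, clamped
    # to the truncation band.
    sdf = np.clip(surface_depth - voxels, -trunc, trunc)
    # Running weighted average, as in classical TSDF Fusion.
    tsdf = (weight * tsdf + sdf) / (weight + 1.0)
    weight = weight + 1.0
```

The zero crossing of the fused field marks the reconstructed surface (here near `voxels[4] == 1.0`); a fused grid of this kind is what InFusionSurf pretrains its feature grid against.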
Large-scale Text-to-Image Generation Models for Visual Artists' Creative Works
Large-scale Text-to-image Generation Models (LTGMs) (e.g., DALL-E),
self-supervised deep learning models trained on a huge dataset, have
demonstrated the capacity for generating high-quality open-domain images from
multi-modal input. Although they can even produce anthropomorphized versions of
objects and animals, combine irrelevant concepts in reasonable ways, and give
variation to any user-provided images, we observed that this rapid technological
advancement has left many visual artists unsure how to leverage LTGMs more
actively in their creative works. Our goal in this work is to understand how
visual artists would adopt LTGMs to support their creative works. To this end,
we conducted an interview study as well as a systematic literature review of 72
system/application papers for a thorough examination. A total of 28 visual
artists covering 35 distinct visual art domains acknowledged LTGMs' versatile
roles with high usability to support creative works in automating the creation
process (i.e., automation), expanding their ideas (i.e., exploration), and
facilitating or arbitrating in communication (i.e., mediation). We conclude by
providing four design guidelines that future researchers can refer to in making
intelligent user interfaces using LTGMs.
Comment: 15 pages, 3 figures
ARphy: Managing photo collections using physical objects in AR
ARphy is a tangible interface that extends current ways of organizing photo collections by enabling people to interact with digital photos using physical objects in Augmented Reality. ARphy contextually connects photos with real objects and utilizes physical affordances so that people can add more meanings to their collections and interact with them naturally. We also created an ARphy Interaction Design Toolkit, which can add ARphy-compatible interactions to any object, so that people can register their own things for organizing collections. We developed a prototype using seven everyday objects and evaluated ARphy through a qualitative user study. Our findings indicate that ARphy is intuitive, immersive, and enjoyable and has the potential for selectively managing collections using photos and objects that have personal meanings.
An Artificial Intelligence Exercise Coaching Mobile App: Development and Randomized Controlled Trial to Verify Its Effectiveness in Posture Correction
Background: Insufficient physical activity due to social distancing and suppressed outdoor activities increases vulnerability to diseases such as cardiovascular disease, sarcopenia, and severe COVID-19. While bodyweight exercises, such as squats, effectively boost physical activity, incorrect postures risk abnormal muscle activation and joint strain, leading to ineffective sessions or even injuries. Avoiding incorrect postures is challenging for novices without expert guidance, and existing solutions for remote coaching and computer-assisted posture correction often prove costly or inefficient.
Objective: This study aimed to use deep neural networks to develop a personal workout assistant that offers feedback on squat postures using only mobile devices (smartphones and tablets). Deep learning mimicked experts' visual assessments of proper exercise postures. The effectiveness of the mobile app was evaluated by comparing it with exercise videos, a popular at-home workout choice.
Methods: Twenty participants without squat exercise experience were recruited and randomly assigned to an experimental group (EXP; 10 individuals, mean age 21.90, SD 2.18, years; mean BMI 20.75, SD 2.11) and a control group (CTL; 10 individuals, mean age 22.60, SD 1.95, years; mean BMI 18.72, SD 1.23). A data set of over 20,000 squat videos annotated by experts was created, and a deep learning model combining pose estimation and video classification was trained to analyze workout postures. A mobile workout assistant app, Home Alone Exercise, was then developed, and a 2-week interventional study, in which the EXP used the app while the CTL only followed workout videos, examined how the app helps people improve the squat exercise.
Results: The EXP significantly improved their squat postures as evaluated by the app after 2 weeks (pre: 0.20 vs mid: 4.20 vs post: 8.00; P=.001), whereas the CTL (without the app) showed no significant change in squat posture (pre: 0.70 vs mid: 1.30 vs post: 3.80; P=.13). Significant differences were observed in the left (pre: 75.06 vs mid: 76.24 vs post: 63.13; P=.02) and right (pre: 71.99 vs mid: 76.68 vs post: 62.82; P=.03) knee joint angles in the EXP before and after exercise, with no significant effect found for the CTL in the left (pre: 73.27 vs mid: 74.05 vs post: 70.70; P=.68) and right (pre: 70.82 vs mid: 74.02 vs post: 70.23; P=.61) knee joint angles.
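Knee joint angles like those reported above are typically derived from pose-estimation keypoints. A minimal sketch (a hypothetical helper, not the study's code) computes the interior knee angle from hip, knee, and ankle coordinates:

```python
import math

def joint_angle(hip, knee, ankle):
    """Interior knee angle in degrees from three 2-D keypoints,
    as a pose-estimation model might produce them (hypothetical helper)."""
    # Vectors from the knee toward the hip and toward the ankle.
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Angle between the two limb segments.
    return math.degrees(math.acos(dot / (n1 * n2)))
```

For example, `joint_angle((0, 0), (0, 1), (1, 1))` gives a 90° bend, while collinear keypoints give 180° (a straight leg).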
Conclusions: EXP participants trained with the app improved faster and learned more nuanced details of the squat exercise. The proposed mobile app, offering cost-effective self-discovery feedback, effectively taught users the squat exercise without expensive in-person trainer sessions.
Trial Registration: Clinical Research Information Service KCT0008178 (retrospectively registered); https://cris.nih.go.kr/cris/search/detailSearch.do/2400