
    A perceptual model of motion quality for rendering with adaptive refresh-rate and resolution

    Limited GPU performance budgets and transmission bandwidths mean that real-time rendering often has to compromise on the spatial resolution or temporal resolution (refresh rate). A common practice is to keep either the resolution or the refresh rate constant and dynamically control the other variable. But this strategy is non-optimal when the velocity of displayed content varies. To find the best trade-off between the spatial resolution and refresh rate, we propose a perceptual visual model that predicts the quality of motion given an object velocity and predictability of motion. The model considers two motion artifacts to establish an overall quality score: non-smooth (juddery) motion, and blur. Blur is modeled as a combined effect of eye motion, finite refresh rate and display resolution. To fit the free parameters of the proposed visual model, we measured eye movement for predictable and unpredictable motion, and conducted psychophysical experiments to measure the quality of motion from 50 Hz to 165 Hz. We demonstrate the utility of the model with our on-the-fly motion-adaptive rendering algorithm that adjusts the refresh rate of a G-Sync-capable monitor based on a given rendering budget and observed object motion. Our psychophysical validation experiments demonstrate that the proposed algorithm performs better than constant-refresh-rate solutions, showing that motion-adaptive rendering is an attractive technique for driving variable-refresh-rate displays.
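    The abstract does not give the fitted perceptual model, but the budgeted trade-off it describes can be illustrated with a toy selection routine. The quality terms, weights, candidate modes, and viewport size below are illustrative assumptions, not the paper's calibrated model; assume a display exposing a discrete set of (refresh rate, resolution) modes.

```python
# Toy sketch of the resolution / refresh-rate trade-off under a fixed pixel
# budget. Quality terms, weights, mode list, and viewport size are assumptions.

def predicted_quality(velocity_deg_s, refresh_hz, pixels_per_deg):
    eye_blur = velocity_deg_s / refresh_hz        # degrees the eye sweeps per frame
    resolution_blur = 1.0 / pixels_per_deg        # pixel footprint in degrees
    judder = 0.5 * velocity_deg_s / refresh_hz    # non-smooth-motion penalty
    return -(eye_blur + resolution_blur + judder)  # higher is better

def best_mode(velocity_deg_s, budget_pixels_per_s,
              modes=((50, 60), (90, 45), (120, 38), (165, 30))):
    """Pick the (refresh_hz, pixels_per_deg) mode with the highest predicted
    quality that fits the pixel budget, assuming a 30 x 17 degree viewport."""
    viewport_deg2 = 30 * 17
    feasible = [(hz, ppd) for hz, ppd in modes
                if hz * ppd * ppd * viewport_deg2 <= budget_pixels_per_s]
    return max(feasible, key=lambda m: predicted_quality(velocity_deg_s, *m),
               default=None)

# Slow motion selects a higher-resolution mode than fast motion.
print(best_mode(velocity_deg_s=2.0, budget_pixels_per_s=2e8))    # (120, 38)
print(best_mode(velocity_deg_s=40.0, budget_pixels_per_s=2e8))   # (165, 30)
```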

    High Frame Rates and the Visibility of Motion Artifacts


    Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods for Front-Facing Views

    Neural view synthesis (NVS) is one of the most successful techniques for synthesizing free viewpoint videos, capable of achieving high fidelity from only a sparse set of captured images. This success has led to many variants of the techniques, each evaluated on a set of test views typically using image quality metrics such as PSNR, SSIM, or LPIPS. There has been a lack of research on how NVS methods perform with respect to perceived video quality. We present the first study on perceptual evaluation of NVS and NeRF variants. For this study, we collected two datasets of scenes captured in a controlled lab environment as well as in-the-wild. In contrast to existing datasets, these scenes come with reference video sequences, allowing us to test for temporal artifacts and subtle distortions that are easily overlooked when viewing only static images. We measured the quality of videos synthesized by several NVS methods in a well-controlled perceptual quality assessment experiment as well as with many existing state-of-the-art image/video quality metrics. We present a detailed analysis of the results and recommendations for dataset and metric selection for NVS evaluation.
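    For context, the per-frame image metrics the abstract mentions (PSNR, SSIM) are straightforward to compute against reference frames, though, as the study argues, frame-by-frame scores can miss temporal artifacts. A minimal sketch, assuming scikit-image is available and that synthesized and reference frames are aligned float images in [0, 1]:

```python
# Per-frame full-reference evaluation of a synthesized video against its
# captured reference. Frame data here is synthetic; metric calls use
# scikit-image (assumed available).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sequence(synth_frames, ref_frames):
    """Average PSNR/SSIM over aligned frame pairs (float images in [0, 1])."""
    psnrs, ssims = [], []
    for synth, ref in zip(synth_frames, ref_frames):
        psnrs.append(peak_signal_noise_ratio(ref, synth, data_range=1.0))
        ssims.append(structural_similarity(ref, synth, channel_axis=-1,
                                           data_range=1.0))
    return float(np.mean(psnrs)), float(np.mean(ssims))

# Toy example: reference frames plus mild noise stand in for synthesized views.
ref = [np.random.rand(64, 64, 3).astype(np.float32) for _ in range(3)]
synth = [np.clip(f + 0.01 * np.random.randn(*f.shape).astype(np.float32), 0, 1)
         for f in ref]
print(evaluate_sequence(synth, ref))   # (mean PSNR in dB, mean SSIM)
```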

    Validation of Clinical Tests Used to Identify Patients Who Would Benefit From Trunk Stabilization Exercises: Preliminary Steps to Refine Test Interpretation and Improve Intervention Prescription

    Low back pain (LBP) presents a challenge in rehabilitation due to its heterogeneous presentation across patients. However, trunk stabilization exercises have been shown to be successful in patients who meet specific clinical prediction rules. Identifying the mechanisms that underlie the tests used in these clinical prediction rules may lead to a better understanding of impairments in these patients and may help refine intervention selection and prescription. The purpose of this dissertation was to identify mechanisms underlying two clinical tests used to predict a patient’s success with trunk stabilization exercises: aberrant movements observed during forward bending and the prone instability test (PIT). The aims were to: 1) characterize lumbar extensor muscle neuromuscular control during active forward bending and the PIT; 2) validate clinical assumptions about the role that impaired lumbar multifidus muscle activity plays in aberrant movement patterns during a forward bend task and in a positive PIT. Aim 1 results revealed that all trunk extensors are activated to a greater extent in those with aberrant forward bending, with the lumbar multifidus providing the greatest contribution. In the PIT, muscle activity during the leg-raising portion of the test produced a significant increase in spinal stiffness and a reduction in pain. However, participants with LBP relied on fewer muscle synergies, dominated by extrinsic muscles, compared with participants without LBP. Aim 2 results revealed that a positive PIT, with pain reduction and increased spinal stiffness, could be reproduced in participants with LBP through electrical stimulation of the lumbar multifidus. However, fatigue of the muscle induced by electrical stimulation did not produce aberrant movement in individuals without LBP. The adaptations in neuromuscular control during forward bending and the PIT observed in individuals with LBP suggest that exercises addressing movement control and coordination may be necessary within the intervention. Ph.D., Rehabilitation Sciences -- Drexel University, 201
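    The abstract does not state how muscle synergies were extracted; a common choice in this literature is non-negative matrix factorization (NMF) of rectified, smoothed EMG envelopes, selecting the smallest number of synergies that reaches a variance-accounted-for (VAF) threshold. The sketch below is illustrative only (synthetic data, scikit-learn's NMF, a hypothetical 90% threshold), not the dissertation's analysis pipeline.

```python
# Illustrative synergy-counting sketch: NMF on EMG envelopes with a VAF cutoff.
import numpy as np
from sklearn.decomposition import NMF

def count_synergies(emg_envelopes, vaf_threshold=0.90, max_synergies=6):
    """emg_envelopes: (n_samples, n_muscles) non-negative EMG envelope matrix.
    Returns the smallest number of synergies whose NMF reconstruction reaches
    the VAF threshold."""
    max_synergies = min(max_synergies, emg_envelopes.shape[1])
    total_ss = np.sum(emg_envelopes ** 2)
    for k in range(1, max_synergies + 1):
        model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
        weights = model.fit_transform(emg_envelopes)    # activation coefficients
        recon = weights @ model.components_             # components_ = synergy vectors
        vaf = 1.0 - np.sum((emg_envelopes - recon) ** 2) / total_ss
        if vaf >= vaf_threshold:
            return k
    return max_synergies

# Synthetic example: 5 "muscles" driven by 2 underlying activation patterns.
rng = np.random.default_rng(0)
emg = rng.random((200, 2)) @ rng.random((2, 5)) + 0.01 * rng.random((200, 5))
print(count_synergies(emg))   # number of synergies needed to reach 90% VAF
```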

    A Programmable Display Layer for Virtual Reality System Architectures


    Perception of Color Break-Up

    Background. A color-distorting artifact called color break-up (CBU) was investigated. Disturbing CBU effects occur when eye movements (e.g., pursuits or saccades) are performed while content is presented on field-sequential color (FSC) displays or projection systems, in which the primary colors are displayed sequentially rather than simultaneously. Methods. A mixed design of empirical research and theoretical modeling was used to address the main research questions. The empirical studies evaluated the impact of hardware-based, content-based, and viewer-based factors on the sample’s CBU perception. First, visual performance parameters (e.g., color vision), short-term state (e.g., attention level), and long-term personality traits (e.g., affinity for technology) of the sample were recorded. Participants were then asked to rate the perceived CBU intensity of different video sequences presented by an FSC-based projector. The setup allowed the following variables to be manipulated: the size of the CBU-provoking content (1.0 to 6.0°), its luminance (10.0 to 157.0 cd/m²), the participant’s eye movement pattern (pursuit velocity: 18.0 to 54.0 °/s; saccadic amplitude: 3.6 to 28.2°), the position of retinal stimulation (0.0 to 50.0°), and the projector’s frame rate (30.0 to 420.0 Hz). Correlations between the independent variables and subjective CBU perception were tested. Complementing the empirical studies, the developed model predicts a viewer’s CBU perception on a theoretical basis: it first reconstructs the intensity and color characteristics of CBU effects graphically, then compresses this visual reconstruction into representative model indices that quantify the modeled scenario with a manageable set of metrics. Finally, the model output was compared to the empirical data. Results. The high interindividual CBU variability within the sample cannot be explained by a participant’s visual performance, short-term state, or long-term personality traits. Conditions that distinctly elevate CBU perception are (1) a foveal stimulus position on the retina, (2) a small stimulus during saccades, (3) a high eye movement velocity, and (4) a low projector frame rate (correlation described by an exponential function, r² > .93). Stimulus luminance, however, affects CBU perception only slightly. In general, the model helps to understand the fundamental processes of CBU genesis, to investigate the impact of CBU determinants, and to establish a classification scheme for different CBU variants. The model adequately predicts the empirical data within the specified tolerance ranges. Conclusions. The study results allow frame rates and content characteristics (size and position) to be chosen such that predefined annoyance thresholds for CBU perception are not exceeded. The derived hardware requirements and content recommendations enable practical, evidence-based CBU management. For CBU prediction, model accuracy can be further improved by considering features of human perception, e.g., eccentricity-dependent retinal sensitivity or changes in visual perception across different types of eye movements; participant-based data from the empirical research can be used to model these features.
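    The dependence of CBU on frame rate and eye velocity can be illustrated with a simple geometric back-of-the-envelope calculation. This is purely illustrative; the study's model is more detailed, and the 0.05° threshold below is an assumed placeholder, not a value from the experiments.

```python
# Geometric sketch of color break-up (CBU) extent on the retina for a
# field-sequential color display: during smooth pursuit at eye velocity v,
# the R/G/B subfields of one frame land at slightly different retinal positions.

def cbu_extent_deg(eye_velocity_deg_s, frame_rate_hz, subfields_per_frame=3):
    # Retinal offset between the first and last subfield of one frame.
    subfield_period_s = 1.0 / (frame_rate_hz * subfields_per_frame)
    return eye_velocity_deg_s * subfield_period_s * (subfields_per_frame - 1)

def min_frame_rate_hz(eye_velocity_deg_s, max_extent_deg=0.05,
                      subfields_per_frame=3):
    """Lowest frame rate keeping the CBU extent below a chosen threshold."""
    n = subfields_per_frame
    return eye_velocity_deg_s * (n - 1) / (n * max_extent_deg)

# Example: pursuit at 54 deg/s (upper bound of the tested range).
print(cbu_extent_deg(54.0, frame_rate_hz=60))   # 0.6 deg of color fringing
print(min_frame_rate_hz(54.0))                  # 720 Hz needed for < 0.05 deg
```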

    Using Two Simulation Tools to Teach Concepts in Introductory Astronomy: A Design-Based Research Approach

    Technology in college classrooms has gone from being an enhancement to the learning experience to being something expected by both instructors and students. This design-based research investigation takes technology one step further, putting the tools used to teach directly in the hands of students. The study examined the affordances and constraints of two simulation tools for use in introductory astronomy courses: a virtual reality headset and a fulldome immersive planetarium simulation, each used to manipulate a lunar surface flyby. A multi-method research approach with N = 67 participants, recruited from astronomy classes over one academic year at a two-year college, was used to identify the variety of experiences participants had with the tools. Participants manipulated a lunar flyby using a virtual reality headset and a motion-sensor device in the college fulldome planetarium. Data were collected through two post-treatment questionnaires using Likert-type scales and one small-group interview intended to elicit the range of experiences participants had using the tools. Responses were analyzed quantitatively for optimal flyby speed and qualitatively for salient themes, using data reduction informed by a phenomenographic methodological framework. Analysis of the Immersion Questionnaire and Simulator Sickness Questionnaire data in SPSS indicated that the optimal flyby speed for college students manipulating the Moon was 0.04 times the radius of the Earth (3,959 miles) per second, or roughly 160 miles per second. MAXQDA software was used to code positive and negative remarks participants made while engaged with each tool, revealing a variety of participant experiences. Both tools offer potential to actively engage students with astronomy content in college lecture and laboratory courses.
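    For reference, the reported speed is a simple product; a quick check, assuming the stated Earth radius of 3,959 miles:

```python
# Quick arithmetic check of the reported optimal flyby speed.
earth_radius_miles = 3959
optimal_speed_mps = 0.04 * earth_radius_miles   # miles per second
print(round(optimal_speed_mps))                 # ~158, reported as roughly 160 mi/s
```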