Illuminating palaeolithic art using virtual reality: A new method for integrating dynamic firelight into interpretations of art production and use
Approaches to Palaeolithic art have increasingly shifted beyond the traditional focus on engraved or depicted forms in isolation towards appreciating the sensorial experience of art making as integral to shaping the form of depictions and the meaning imbued within them. This kind of research considers an array of factors pertinent to how the art may have been understood or experienced by people during the Palaeolithic, including placement, lighting, accessibility, sound, and tactility. This paper contributes to this “sensory turn” in Palaeolithic art research, arguing that the roving light cast by the naked flame of fires, torches, or lamps is an important dimension in understanding artistic experiences. However, capturing these effects, whether during analysis, as part of interpretation, or in presentation, can be challenging. A new method is presented in virtual reality (VR) modelling, applied to Palaeolithic art contexts for the first time, as a safe and non-destructive means of simulating dynamic light sources to facilitate the analysis, interpretation, and presentation of Palaeolithic art under actualistic lighting conditions. VR was applied to two Magdalenian case studies: parietal art from Las Monedas (Spain) and portable stone plaquettes from Montastruc (France). VR models were produced using Unity software and digital models of the art captured via white-light (Montastruc) and photogrammetric (Las Monedas) scans. The results demonstrate that this novel application of VR facilitates the testing of hypotheses related to the sensorial and experiential dimensions of Palaeolithic art, allowing discussions of these elements to be elevated beyond theoretical ideas.
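The core of the simulation described above is a light source whose intensity wavers over time rather than strobing randomly. A minimal sketch of one way to generate such a signal is below; in a Unity scene a value like this would be assigned to a light's intensity each frame. All names and parameter values here are illustrative assumptions, not the paper's actual implementation.

```python
import random

def flicker_series(n_steps, mean=1.0, spread=0.25, inertia=0.9, seed=42):
    """Generate a smoothed random intensity series imitating the
    unsteady output of a naked flame (illustrative sketch only)."""
    rng = random.Random(seed)
    intensity = mean
    series = []
    for _ in range(n_steps):
        # Pull intensity back toward the mean, then add a random kick.
        # The inertia term keeps successive samples correlated, so the
        # light wavers smoothly instead of jumping frame to frame.
        kick = rng.uniform(-spread, spread)
        intensity = mean + inertia * (intensity - mean) + kick
        intensity = max(0.0, intensity)  # a light cannot be negative
        series.append(intensity)
    return series

values = flicker_series(500)
```

Temporally correlated noise of this kind is a common, simple stand-in for flame flicker; a more faithful simulation would also vary the light's colour temperature and position.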
Distance mis-estimations can be reduced with specific shadow locations
Shadows in physical space are copious, yet the impact of specific shadow placement and abundance has yet to be determined in virtual environments. This experiment aimed to identify whether a target’s shadow is used as a distance indicator in the presence of binocular distance cues. Six lighting conditions were created and presented in virtual reality for participants to perform a perceptual matching task. The task was repeated in a cluttered and a sparse environment, in which the number of cast shadows (and their placement) varied. Performance was measured by the directional bias of distance estimates and the variability of responses. No significant difference was found between the sparse and cluttered environments; however, given the large variance, one explanation is that some participants used the clutter objects as anchors to aid them, while others found them distracting. Under-setting of distances was found in all conditions and environments, as predicted. An ambient light source produced the most variable and inaccurate distance estimates, whereas lighting positioned above the target reduced the mis-estimation of perceived distances.
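The two performance measures named above, directional bias and response variability, can be computed straightforwardly from a set of matching responses. The sketch below uses hypothetical numbers for illustration; the function name and data are assumptions, not taken from the study.

```python
from statistics import mean, stdev

def distance_error_stats(true_d, estimates):
    """Summarise perceptual matching responses for one condition:
    signed (directional) bias and response variability.
    A negative bias means distances were under-set."""
    errors = [e - true_d for e in estimates]
    return {"bias": mean(errors), "variability": stdev(errors)}

# Hypothetical responses (metres) for a target at 3.0 m:
stats = distance_error_stats(3.0, [2.6, 2.8, 2.7, 2.9, 2.5])
# bias is negative here, i.e. distances are under-set
```

Separating the signed mean error from the spread of errors matters because, as the abstract notes, a condition can be unbiased on average yet highly variable (or vice versa).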
New evidence on the anisotropy of visual space and the influence of the environment on visual performance
Recent decades have witnessed multiple studies investigating the accuracy of our visual system in depth perception, as well as the influence of environmental factors during depth judgment tasks. It is not uncommon for cues present in visual space to offer contradictory or conflicting information, leading to bias and error, which in turn produce illusory effects resulting from discrepancies between physical and perceived dimensions.
The overall aim of this doctoral thesis was to present new evidence of the anisotropy of visual space and of the influence of the environment on judgments of distance between objects, including the influence of background characteristics (curved, flat, etc.) on the perception of visual stimuli presented against them. To this end, three groups of experiments were designed and conducted, with the following partial objectives: (1) to evaluate the role of vertical disparities in relative distance judgment tasks when stimuli were located at different depth planes and presented in different orientations; (2) to determine the influence of background configuration and stimulus orientation on relative distance judgment tasks, with stimuli either in the same frontoparallel plane or at different depth planes; and (3) to verify the neural nature of the anisotropy of visual space through a non-invasive psychophysical approach using SIRDS (single-image random-dot stereograms).
The findings of this PhD thesis contribute to our understanding of cue integration in binocular vision, as well as of the nature of visual bias in depth perception.
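SIRDS work by constraining pairs of pixels, separated by a depth-dependent distance, to be identical, so that when the two eyes fuse the repeating pattern a surface appears in depth. A minimal single-row sketch of this linking idea follows; parameter names and values are illustrative assumptions, not the thesis's stimuli.

```python
import random

def sirds_row(depth_row, width, base_sep=60, depth_gain=20, seed=0):
    """One row of a single-image random-dot stereogram (SIRDS).
    Each pixel is constrained to equal the pixel `sep` columns to its
    left, where `sep` shrinks as depth rises, so the fused region
    appears to float in front of the background."""
    rng = random.Random(seed)
    row = []
    for x in range(width):
        # Nearer surfaces (larger depth values) get a smaller separation.
        sep = base_sep - int(depth_gain * depth_row[x])
        if x < sep:
            row.append(rng.randint(0, 1))  # unconstrained: random dot
        else:
            row.append(row[x - sep])       # constrained to match its pair
    return row

# Flat background on the left, a raised surface on the right:
row = sirds_row([0.0] * 50 + [1.0] * 50, width=100)
```

A full implementation would also resolve hidden surfaces and apply the left/right constraints symmetrically, but the depth-to-separation linking shown here is the core of the technique, and it is what makes SIRDS useful as a purely cyclopean (disparity-only) stimulus.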
A Unified Cognitive Model of Visual Filling-In Based on an Emergic Network Architecture
The Emergic Cognitive Model (ECM) is a unified computational model of visual filling-in based on the Emergic Network architecture. The Emergic Network was designed to help realize systems undergoing continuous change. In this thesis, eight different filling-in phenomena are demonstrated under a regime of continuous eye movement (and under static eye conditions as well).
ECM indirectly demonstrates the power of unification inherent in Emergic Networks when cognition is decomposed according to finer-grained functions supporting change. These can interact to give rise to additional emergent behaviours via cognitive re-use, hence the Emergic prefix throughout. Nevertheless, the model is robust and parameter-free. Differential re-use arises from how the model interacts with a particular testing paradigm.
ECM has a novel decomposition driven by the requirements of handling motion and of supporting unified modelling via finer functional grains. The breadth of phenomenal behaviour covered serves largely to lend credence to this novel decomposition.
The Emergic Network architecture is a hybrid between classical connectionism and classical computationalism that facilitates the construction of unified cognitive models. It supports cutting functionality into finer grains distributed over space (by harnessing massive recurrence) and over time (by harnessing continuous change), yet simplifies matters by using standard computer code to focus on the interaction of information flows. Thus, while the structure of the network looks neurocentric, the dynamics are best understood in flowcentric terms. Surprisingly, dynamical systems analysis (as usually understood) is not involved. An Emergic Network is engineered much like a straightforward software or hardware system that deals with continuously varying inputs. Ultimately, this thesis addresses the problem of reduction and induction over complex systems, and the Emergic Network architecture is merely a tool to assist in this epistemic endeavour.
ECM is strictly a sensory model, distinct from perception, yet it is informed by phenomenology. It addresses the attribution problem of how much of a phenomenon is best explained at a sensory level of analysis rather than at a perceptual one. As the causal information flows are stable under eye movement, we hypothesize that they are the locus of consciousness, howsoever it is ultimately realized.
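The flowcentric picture sketched above, recurrently connected nodes exchanging continuously varying values in standard code rather than via dynamical-systems machinery, can be illustrated generically. The toy network below is an assumption-laden illustration of that style of modelling, not ECM's actual decomposition or code.

```python
def step(values, links):
    """One synchronous update of a toy recurrent flow network: each
    node's next value is the mean of its inbound flows, so information
    spreads and settles through continuous small changes."""
    return {node: sum(values[src] for src in srcs) / len(srcs)
            for node, srcs in links.items()}

# A tiny recurrent loop; node names are illustrative, not ECM's.
links = {"a": ["c"], "b": ["a"], "c": ["a", "b"]}
values = {"a": 1.0, "b": 0.0, "c": 0.0}
for _ in range(200):
    values = step(values, links)
# The recurrent flows converge toward a common stable value.
```

The point of the sketch is the engineering stance: the update rule is ordinary code over information flows, and stable behaviour emerges from iteration rather than from an explicit dynamical-systems analysis.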