A Review of the Relationship Between Sound Symbolism and Object Perception
Sound symbolism refers to a non-arbitrary relationship between the sound properties of words and their meanings. Studies of objects in this area have frequently investigated the systematic relationship between sound frequency and perceived shape. People associate higher-frequency sounds with angular shapes and lower-frequency sounds with rounded shapes, a phenomenon known as the Bouba/Kiki effect. In parallel, the sound /i/ has generally been found to be associated with smallness and the sound /a/ with largeness; sound symbolism thus also appears in size perception. Onomatopoeic words, a subset of sound-symbolic words, imitate the sounds of the materials they describe. In Japanese, such words have been linked to the perception of surface-material qualities, among other sensory experiences. Like Japanese, Turkish is also rich in sound-symbolic words that are common in everyday speech (e.g., şap şap, tıkır tıkır). This article first reviews examples of sound-shape associations in the literature centered on the Bouba/Kiki effect, then turns to sound-size associations. Finally, it presents studies showing that sound symbolism concerns not only the shapes or sizes of objects but also their materials. The article closes by discussing the limited examples available in Turkish.
Selectively manipulating softness perception of materials through sound symbolism
Cross-modal interactions between auditory and haptic perception manifest themselves in language, such as sound symbolic words: crunch, splash, and creak. Several studies have shown strong associations between sound symbolic words, shapes (e.g., Bouba/Kiki effect), and materials. Here, we identified these material associations in Turkish sound symbolic words and then tested for their effect on softness perception. First, we used a rating task in a semantic differentiation method to extract the perceived softness dimensions from words and materials. We then tested whether Turkish onomatopoeic words can be used to manipulate the perceived softness of everyday materials such as honey, silk, or sand across different dimensions of softness. In the first preliminary study, we used 40 material videos and 29 adjectives in a rating task with a semantic differentiation method to extract the main softness dimensions. A principal component analysis revealed seven softness components, including Deformability, Viscosity, Surface Softness, and Granularity, in line with the literature. The second preliminary study used 27 onomatopoeic words and 21 adjectives in the same rating task. Again, the findings aligned with the literature, revealing dimensions such as Viscosity, Granularity, and Surface Softness. However, no factors related to Deformability were found due to the absence of sound symbolic words in this category. Next, we paired the onomatopoeic words and material videos based on their associations with each softness dimension. We conducted a new rating task, synchronously presenting material videos and spoken onomatopoeic words. We hypothesized that congruent word-video pairs would produce significantly higher ratings for dimension-related adjectives, while incongruent word-video pairs would decrease these ratings, and the ratings of unrelated adjectives would remain the same. 
Our results revealed that onomatopoeic words selectively alter the perceived material qualities, providing evidence and insight into the cross-modality of perceived softness.
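The dimensionality-reduction step described above (extracting softness dimensions from a stimuli-by-adjectives rating matrix with principal component analysis) can be sketched as follows. This is a minimal PCA via SVD on synthetic placeholder ratings, not the study's materials or analysis code; names and data are illustrative.

```python
import numpy as np

def softness_components(ratings, n_components=2):
    """Extract principal components from a (stimuli x adjectives) rating matrix.

    A minimal PCA via SVD on column-centered data; the study used a
    standard PCA with more components on real semantic-differential ratings.
    """
    X = ratings - ratings.mean(axis=0)           # center each adjective scale
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    loadings = Vt[:n_components].T               # adjective loadings per component
    scores = X @ loadings                        # stimulus scores per component
    explained = (S**2 / np.sum(S**2))[:n_components]
    return loadings, scores, explained

# Synthetic example: 6 "materials" rated on 4 "adjectives", where two
# adjective pairs co-vary, so two components capture most of the variance.
rng = np.random.default_rng(0)
visc = rng.normal(size=(6, 1))                   # hypothetical "viscosity" factor
gran = rng.normal(size=(6, 1))                   # hypothetical "granularity" factor
ratings = np.hstack([visc, visc, gran, gran]) + 0.01 * rng.normal(size=(6, 4))
loadings, scores, explained = softness_components(ratings)
```

With near-noiseless two-factor data, the first two components absorb almost all variance; on real ratings the component count is chosen from the scree of explained variance.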
Perceptual learning of second order cues for layer decomposition.
Luminance variations are ambiguous: they can signal changes in surface reflectance or changes in illumination. Layer decomposition, the process of distinguishing between reflectance and illumination changes, is supported by a range of secondary cues including colour and texture. For an illuminated corrugated, textured surface, the shading pattern comprises modulations of luminance (first-order, LM) and of local luminance amplitude (second-order, AM). The phase relationship between these two signals enables layer decomposition, predicts the perception of reflectance and illumination changes, and has been modelled based on early, fast, feed-forward visual processing (Schofield et al., 2010). However, while inexperienced viewers appreciate this scission at long presentation times, they cannot do so at short presentation durations (250 ms). This might suggest the action of slower, higher-level mechanisms. Here we consider how training attenuates this delay, and whether the resultant learning occurs at a perceptual level. We trained observers over a period of 5 days to discriminate the components of plaid stimuli that mixed in-phase and anti-phase LM/AM signals. After training, the strength of the AM signal needed to differentiate the plaid components fell dramatically, indicating learning. We tested for transfer of learning using stimuli with different spatial frequencies, in-plane orientations, and acutely angled plaids. We report that learning transfers only partially when the stimuli are changed, suggesting that the benefits accrue from the tuning of specific mechanisms rather than from general interpretative processes. We suggest that the mechanisms supporting layer decomposition from second-order cues are relatively early, and not inherently slow.
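The in-phase versus anti-phase LM/AM relationship at the heart of the abstract can be made concrete with a 1-D sketch: a noise texture whose mean luminance carries a first-order (LM) modulation and whose local contrast carries a second-order (AM) envelope, either aligned with or opposed to it. Parameter names and values are illustrative assumptions, not the study's stimulus code.

```python
import numpy as np

def lm_am_signal(n=256, freq=4, lm=0.2, am=0.4, phase=0.0, seed=0):
    """1-D noise texture with a luminance modulation (LM, first order) and a
    local-amplitude modulation (AM, second order) at the same frequency.

    phase=0 gives in-phase LM/AM (shading-like interpretation);
    phase=np.pi gives anti-phase LM/AM (reflectance-like interpretation).
    """
    x = np.linspace(0, 1, n, endpoint=False)
    carrier = np.random.default_rng(seed).choice([-1.0, 1.0], size=n)  # binary noise texture
    luminance = 0.5 * (1 + lm * np.sin(2 * np.pi * freq * x))          # first-order LM signal
    amplitude = 0.25 * (1 + am * np.sin(2 * np.pi * freq * x + phase)) # second-order AM envelope
    return luminance + amplitude * carrier

in_phase = lm_am_signal(phase=0.0)    # luminance peaks where contrast peaks
anti_phase = lm_am_signal(phase=np.pi)  # luminance peaks where contrast troughs
```

Plaid stimuli like those in the study would superimpose two such signals at different orientations; the observer's task is then to report which component carries the in-phase pairing.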
Perceptual integration for qualitatively different 3-D cues in the human brain.
The visual system's flexibility in estimating depth is remarkable: We readily perceive 3-D structure under diverse conditions, from the seemingly random dots of a "magic eye" stereogram to the aesthetically beautiful, but obviously flat, canvasses of the Old Masters. Yet, 3-D perception is often enhanced when different cues specify the same depth. This perceptual process is understood as Bayesian inference that improves sensory estimates. Despite considerable behavioral support for this theory, insights into the cortical circuits involved are limited. Moreover, extant work tested quantitatively similar cues, reducing some of the challenges associated with integrating computationally and qualitatively different signals. Here we address this challenge by measuring fMRI responses to depth structures defined by shading, binocular disparity, and their combination. We quantified information about depth configurations (convex "bumps" vs. concave "dimples") in different visual cortical areas using pattern classification analysis. We found that fMRI responses in dorsal visual area V3B/KO were more discriminable when disparity and shading concurrently signaled depth, in line with the predictions of cue integration. Importantly, by relating fMRI and psychophysical tests of integration, we observed a close association between depth judgments and activity in this area. Finally, using a cross-cue transfer test, we found that fMRI responses evoked by one cue afford classification of responses evoked by the other. This reveals a generalized depth representation in dorsal visual cortex that combines qualitatively different information in line with 3-D perception.
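The Bayesian cue-integration prediction invoked above (that combining two independent cues improves sensory estimates) has a standard closed form: each cue is weighted by its reliability, and the combined estimate has lower variance than either cue alone. The numbers below are illustrative, not data from the study.

```python
import numpy as np

def combine_cues(est_a, sigma_a, est_b, sigma_b):
    """Reliability-weighted (maximum-likelihood) combination of two
    independent depth-cue estimates, the standard Bayesian-observer model."""
    w_a = sigma_b**2 / (sigma_a**2 + sigma_b**2)  # weight grows as cue A gets more reliable
    w_b = 1.0 - w_a
    combined = w_a * est_a + w_b * est_b
    # The combined standard deviation is below either single-cue value:
    sigma_c = np.sqrt((sigma_a**2 * sigma_b**2) / (sigma_a**2 + sigma_b**2))
    return combined, sigma_c

# Illustrative depth estimates (arbitrary units): a reliable disparity cue
# and a noisier shading cue pull the combined estimate toward disparity.
depth, sigma = combine_cues(10.0, 1.0, 12.0, 2.0)
```

The fMRI prediction tested in the abstract follows the same logic: if V3B/KO integrates the cues, its responses should be more discriminable (more "reliable") in the combined-cue condition than predicted from either cue alone.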
Estimation of 3D shape from shading and binocular disparity
How does the visual system make use of various sources of information about the three-dimensional (3D) geometry of the world? To infer distances in a 3D scene, the brain uses multiple cues such as binocular disparity, which provides metric estimates of depth, or shading, which is inherently ambiguous and requires additional interpretation. In this thesis, I use psychophysical and functional magnetic resonance imaging (fMRI) techniques to address the following questions: (i) how does the visual system resolve ambiguities in a luminance signal, in particular, separating shading cues to shape from luminance variations caused by changes in the surface material; (ii) when both shading and binocular disparity are available, how do these cues interact to produce a coherent 3D shape estimate; (iii) what is the neural substrate of this cue integration?
First, in Chapter 3, I examine how first- and second-order signals in a luminance pattern are perceived, and ask whether observers can benefit from the phase relationship of these signals as a cue to shape. Next, in Chapter 4, I ask whether decomposing shading and reflectance cues to infer shape can be accomplished within very short presentation times. In Chapter 5, I present evidence that the involvement of V3B/KO in 3D shape processing extends to disparity and shading signals. Moreover, I find a distinct relation between neural activity in this cortical area and the perceptual judgments of individual observers. Finally, in Chapter 6, I continue investigating cue integration to gain further insight into individual variations.
Measuring the effect of symmetry in face perception.
Facial symmetry has been a central component in many studies on face perception. The relationship between bilateral symmetry and subjective judgments of faces is still debated in the literature. In this study, a database of natural-looking face images with different levels of symmetry is constructed using several digital preprocessing and morphing methods. Our aim is to investigate the correlations between quantified asymmetry, perceived symmetry, and a subjective judgment: attractiveness. Images in the METU-Face Database are built to represent three levels of symmetry (original, intermediate, and symmetrical) within five classes which also represent the orientation of bilateral symmetry: left versus right. In addition, the asymmetry of the original images is quantified using a landmark-based method. Based on the theory of holistic face perception, we introduce a novel method to quantify facial asymmetry holistically: entropy-based quantification. In Experiments 1 and 2, images were rated on attractiveness and on perceived symmetry, respectively. Results indicate that landmark-based quantifications were not sufficient to account for perceived symmetry ratings (SRs), but they revealed that as the vertical deviation of symmetry decreases, the attractiveness rating (AR) collected for that face increases. Moreover, morphing classes were highly correlated with both ARs and SRs. Consistent with previous research, symmetrical images were found more attractive. We found that although ARs were the same for left versus right composites, SRs differed significantly between left and right composites. Finally, a more elucidative quantification approach for subjective face perception is achieved through significant correlations of entropy scores with both ARs and SRs.
M.S. - Master of Science
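The entropy-based quantification described above can be sketched as comparing the intensity-histogram entropies of the two face halves: a mirror-symmetric image scores zero, and the score grows as the halves' statistics diverge. This is a sketch of the general idea only; the thesis's actual entropy quantification and preprocessing may differ, and the image here is synthetic.

```python
import numpy as np

def shannon_entropy(values, bins=32):
    """Shannon entropy (in bits) of an intensity histogram over [0, 1]."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

def entropy_asymmetry(image):
    """Holistic asymmetry score for a (H x W) intensity image in [0, 1]:
    absolute difference between left- and right-half histogram entropies."""
    h, w = image.shape
    left = image[:, : w // 2]
    right = image[:, w - w // 2 :][:, ::-1]  # mirror the right half
    return abs(shannon_entropy(left) - shannon_entropy(right))

# A perfectly mirror-symmetric image scores exactly 0:
half = np.random.default_rng(1).random((64, 32))
symmetric = np.hstack([half, half[:, ::-1]])
score = entropy_asymmetry(symmetric)
```

Because it summarizes each half as a whole rather than measuring landmark displacements, a score like this is one way to operationalize the "holistic" quantification the abstract contrasts with the landmark-based method.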