
    Perception of physical stability and center of mass of 3-D objects

    Humans can judge from vision alone whether an object is physically stable. Such judgments allow observers to predict the physical behavior of objects, and hence to guide their motor actions. We investigated the visual estimation of the physical stability of 3-D objects (shown in stereoscopically viewed rendered scenes) and how it relates to visual estimates of their center of mass (COM). In Experiment 1, observers viewed an object near the edge of a table and adjusted its tilt to the perceived critical angle, i.e., the tilt angle at which the object was seen as equally likely to fall or to return to its upright stable position. In Experiment 2, observers visually localized the COM of the same set of objects. In both experiments, observers' settings were compared to physical predictions based on the objects' geometry. In both tasks, deviations from the physical predictions were, on average, relatively small. More detailed analyses of individual observers' settings in the two tasks, however, revealed mutual inconsistencies between observers' critical-angle and COM settings. The results suggest that observers did not use their COM estimates in a physically correct manner when making visual judgments of physical stability.
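The geometry behind the critical-angle task can be made concrete: an object tilted about a table edge tips over once its center of mass passes vertically above the pivot. A minimal 2-D sketch under that assumption (the function name and the simplified geometry are illustrative, not the paper's actual stimuli):

```python
import math

def critical_angle(com_x, com_y, pivot_x=0.0):
    """Tilt angle (degrees) at which an object balanced on a pivot edge tips.

    The object falls once its center of mass (COM) passes vertically above
    the pivot edge, i.e. tan(theta_c) = horizontal offset / COM height.
    com_x, com_y: COM coordinates; the pivot edge sits at (pivot_x, 0).
    """
    dx = abs(com_x - pivot_x)            # horizontal distance COM -> pivot
    return math.degrees(math.atan2(dx, com_y))

# A uniform 1 x 1 block pivoting on a bottom corner: COM at (0.5, 0.5),
# so the critical angle is ~45 degrees.
print(critical_angle(0.5, 0.5))
```

A physically correct observer would set the tilt so that the perceived COM lies directly over the edge; the paper's finding is that observers' critical-angle settings were not consistent with their own COM judgments in this way.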

    Categorical perception of tactile distance

    The tactile surface forms a continuous sheet covering the body. And yet, the perceived distance between two touches varies across stimulation sites. Perceived tactile distance is larger when stimuli cross over the wrist, compared to when both fall on either the hand or the forearm. This effect could reflect a categorical distortion of tactile space across body-part boundaries (in which stimuli crossing the wrist boundary are perceptually elongated) or may simply reflect a localised increase in acuity surrounding anatomical landmarks (in which stimuli near the wrist are perceptually elongated). We tested these two interpretations by comparing a well-documented bias to perceive mediolateral tactile distances across the forearm/hand as larger than proximodistal ones along the forearm/hand at three different sites (hand, wrist, and forearm). According to the 'categorical' interpretation, tactile distances should be elongated selectively in the proximodistal axis, thus reducing the anisotropy. According to the 'localised acuity' interpretation, distances will be perceptually elongated in the vicinity of the wrist regardless of orientation, leading to increased overall size without affecting anisotropy. Consistent with the categorical account, we found a reduction in the magnitude of anisotropy at the wrist, with no evidence of a corresponding specialized increase in precision. These findings demonstrate that we reference touch to a representation of the body that is categorically segmented into discrete parts, which consequently influences the perception of tactile distance.

    Creating correct aberrations: why blur isn’t always bad in the eye

    In optics in general, a sharp, aberration-free image is normally the desired goal, and the whole field of adaptive optics has developed with the aim of producing blur-free images. Likewise, in ophthalmic optics we normally aim for a sharp image on the retina. But even in an emmetropic or well-corrected eye, chromatic and higher-order aberrations affect the image. We describe two different areas where it is important to take these effects into account and why creating blur correctly via rendering can be advantageous. First, we show how rendering chromatic aberration correctly can drive accommodation in the eye; second, we report on matching defocus generated via rendering with conventional optical defocus.

    Songbird dynamics under the sea: acoustic interactions between humpback whales suggest song mediates male interactions

    © The Author(s), 2018. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Royal Society Open Science 5 (2018): 171298, doi:10.1098/rsos.171298. The function of song has been well studied in numerous taxa and plays a role in mediating both intersexual and intrasexual interactions. Humpback whales are among the few mammals that sing, but the role of sexual selection on song in this species is poorly understood. While one predominant hypothesis is that song mediates male–male interactions, the mechanism by which this may occur has never been explored. We applied metrics typically used to assess songbird interactions to examine song sequences and movement patterns of humpback whale singers. We found that males altered their song presentation in the presence of other singers; focal males increased the rate at which they switched between phrase types (p = 0.005), and tended to increase the overall evenness of their song presentation (p = 0.06) after a second male began singing. Two-singer dyads overlapped their song sequences significantly more than expected by chance. Spatial analyses revealed that change in distance between singers was related to whether both males kept singing (p = 0.012), with close approaches leading to song cessation. Overall, acoustic interactions resemble known mechanisms of mediating intrasexual interactions in songbirds. Future work should focus on more precisely resolving how changes in song presentation may be used in competition between singing males. D.M.C. was supported by an EPA Science to Achieve Results (STAR) Fellowship for PhD research.

    Mutism, selective mutism, staying silent, pause – serious gaps in the theory and practice of speech therapy

    The article discusses the importance of a scientific description of mutism. The paper starts with linguistic arguments and discusses the phenomenon from the perspective of the typology of speech disorders and from the perspective of the mute person himself/herself. It suggests that the tempo of utterance (which is different from the tempo of speech) should be included in the study of mutism. It also discusses the competence of speech therapists to diagnose mute people and to program their therapy. The article concludes that interdisciplinary research on mutism would allow for a better understanding of this phenomenon and a more effective therapy. The open-access publication by Łódź University Press (Wydawnictwo Uniwersytetu Łódzkiego) was financed under the project "Scientific Excellence as the Key to Excellence in Education" ("Doskonałość naukowa kluczem do doskonałości kształcenia"), implemented with European Social Fund resources under the Operational Programme Knowledge Education Development; agreement no. POWER.03.05.00-00-Z092/17-00.

    ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism

    Computer-graphics engineers and vision scientists want to generate images that reproduce realistic depth-dependent blur. Current rendering algorithms take into account scene geometry, aperture size, and focal distance, and they produce photorealistic imagery as with a high-quality camera. But to create immersive experiences, rendering algorithms should aim instead for perceptual realism. In so doing, they should take into account the significant optical aberrations of the human eye. We developed a method that, by incorporating some of those aberrations, yields displayed images that produce retinal images much closer to the ones that occur in natural viewing. In particular, we create displayed images taking the eye's chromatic aberration into account. This produces different chromatic effects in the retinal image for objects farther or nearer than the current focus. We call the method ChromaBlur. We conducted two experiments that illustrate the benefits of ChromaBlur. One showed that accommodation (eye focusing) is driven quite effectively when ChromaBlur is used and that accommodation is not driven at all when conventional methods are used. The second showed that perceived depth and realism are greater with imagery created by ChromaBlur than with imagery created conventionally. ChromaBlur can be coupled with focus-adjustable lenses and gaze tracking to reproduce the natural relationship between accommodation and blur in HMDs and other immersive devices. It may thereby minimize the adverse effects of vergence-accommodation conflicts.
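The core idea, blurring each color channel by a different, defocus-dependent amount, can be sketched as follows. This is a rough stand-in, not the paper's method: the dioptre offsets are illustrative values for the eye's longitudinal chromatic aberration, and a Gaussian kernel substitutes for a proper eye point-spread function.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative chromatic-defocus offsets (dioptres) relative to the green
# channel; the eye focuses long wavelengths behind short ones.
LCA_OFFSET_D = {"r": +0.4, "g": 0.0, "b": -0.7}

def chromablur(rgb, defocus_d, blur_per_dioptre=2.0):
    """Render depth-dependent blur separately per color channel.

    rgb: float image of shape (H, W, 3); defocus_d: the object's defocus
    (dioptres) relative to the eye's current focus. Each channel gets a
    Gaussian blur whose width grows with that channel's total |defocus|.
    """
    out = np.empty_like(rgb)
    for i, ch in enumerate("rgb"):
        sigma = blur_per_dioptre * abs(defocus_d + LCA_OFFSET_D[ch])
        out[..., i] = gaussian_filter(rgb[..., i], sigma) if sigma > 0 else rgb[..., i]
    return out
```

With `defocus_d = 0` the green channel stays sharp while red and blue are blurred by different amounts, mimicking the in-focus retinal image; for objects nearer or farther than fixation the relative sharpness of the channels shifts, which is the cue the paper shows can drive accommodation.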

    Deep neural networks for automated detection of marine mammal species

    The authors thank the Bureau of Ocean Energy Management for the funding of MARU deployments, Excelerate Energy Inc. for the funding of Autobuoy deployment, and Michael J. Weise of the US Office of Naval Research for support (N000141712867). Deep neural networks have advanced the field of detection and classification and allowed for effective identification of signals in challenging data sets. Numerous time-critical conservation needs may benefit from these methods. We developed and empirically studied a variety of deep neural networks to detect the vocalizations of endangered North Atlantic right whales (Eubalaena glacialis). We compared the performance of these deep architectures to that of traditional detection algorithms for the primary vocalization produced by this species, the upcall. We show that deep-learning architectures are capable of producing false-positive rates that are orders of magnitude lower than alternative algorithms while substantially increasing the ability to detect calls. We demonstrate that a deep neural network trained with recordings from a single geographic region recorded over a span of days is capable of generalizing well to data from multiple years and across the species' range, and that the low false positives make the output of the algorithm amenable to quality control for verification. The deep neural networks we developed are relatively easy to implement with existing software, and may provide new insights applicable to the conservation of endangered species.
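One practical consequence of very low false-positive rates is that an operating threshold can be chosen directly from detector scores on call-free validation audio. A generic sketch of that calibration step, under the assumption of a scalar detection score per audio segment (this is standard practice, not the authors' specific procedure):

```python
import numpy as np

def threshold_for_fpr(neg_scores, target_fpr=0.01):
    """Pick a detection threshold so that at most target_fpr of negative
    (known call-free) validation segments score above it.

    neg_scores: 1-D array of detector scores on negative segments.
    """
    neg_sorted = np.sort(np.asarray(neg_scores))
    k = int(np.ceil(len(neg_sorted) * (1.0 - target_fpr)))
    return neg_sorted[min(k, len(neg_sorted) - 1)]
```

A segment is then flagged as a detection only when its score exceeds the returned threshold; lowering `target_fpr` trades recall for a cleaner output stream, which is what makes manual verification of the remaining detections feasible.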

    Improve automatic detection of animal call sequences with temporal context

    Funding: This work was supported by the US Office of Naval Research (grant no. N00014-17-1-2867). Many animals rely on long-form communication, in the form of songs, for vital functions such as mate attraction and territorial defence. We explored the prospect of improving automatic recognition performance by using the temporal context inherent in song. The ability to accurately detect sequences of calls has implications for conservation and biological studies. We show that the performance of a convolutional neural network (CNN), designed to detect song notes (calls) in short-duration audio segments, can be improved by combining it with a recurrent network designed to process sequences of learned representations from the CNN on a longer time scale. The combined system of independently trained CNN and long short-term memory (LSTM) network models exploits the temporal patterns between song notes. We demonstrate the technique using recordings of fin whale (Balaenoptera physalus) songs, which comprise patterned sequences of characteristic notes. We evaluated several variants of the CNN + LSTM network. Relative to the baseline CNN model, the CNN + LSTM models reduced performance variance, offering a 9-17% increase in area under the precision-recall curve and a 9-18% increase in peak F1-scores. These results show that the inclusion of temporal information may offer a valuable pathway for improving the automatic recognition and transcription of wildlife recordings.
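The described two-stage architecture, a per-window CNN feeding a sequence model, can be sketched as below. All layer sizes, input shapes, and names are illustrative assumptions, and the end-to-end wiring is only for showing the data flow; the paper trains the CNN and LSTM stages independently.

```python
import torch
import torch.nn as nn

class CNNLSTMDetector(nn.Module):
    """Per-window CNN embeddings processed by an LSTM over the window sequence,
    so detections can exploit the temporal pattern of song notes."""

    def __init__(self, embed_dim=32, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(               # encodes one spectrogram window
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # per-window call logit

    def forward(self, x):
        # x: (batch, seq_len, n_mels, n_frames) -- sequence of spectrogram windows
        b, t = x.shape[:2]
        z = self.cnn(x.reshape(b * t, 1, *x.shape[2:])).reshape(b, t, -1)
        h, _ = self.lstm(z)                     # temporal context across windows
        return self.head(h).squeeze(-1)         # (batch, seq_len) logits
```

In the paper's setup the CNN is trained first as a standalone short-segment detector, and the recurrent stage is then trained on sequences of the CNN's learned representations; the sketch above merely shows how the two stages compose.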

    Learning deep models from synthetic data for extracting dolphin whistle contours

    We present a learning-based method for extracting whistles of toothed whales (Odontoceti) in hydrophone recordings. Our method represents audio signals as time-frequency spectrograms and decomposes each spectrogram into a set of time-frequency patches. A deep neural network learns archetypical patterns (e.g., crossings, frequency-modulated sweeps) from the spectrogram patches and predicts time-frequency peaks that are associated with whistles. We also developed a comprehensive method to synthesize training samples from background environments and train the network with minimal human annotation effort. We applied the proposed learn-from-synthesis method to a subset of the public Detection, Classification, Localization, and Density Estimation (DCLDE) 2011 workshop data to extract whistle confidence maps, which we then processed with an existing contour extractor to produce whistle annotations. The F1-score of our best synthesis method was 0.158 greater than our baseline whistle extraction algorithm (~25% improvement) when applied to common dolphin (Delphinus spp.) and bottlenose dolphin (Tursiops truncatus) whistles.
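The decomposition of a spectrogram into overlapping time-frequency patches, the input units of the described network, can be sketched as follows. Patch and stride sizes here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def extract_patches(spec, patch=(32, 32), stride=(16, 16)):
    """Tile a spectrogram (freq x time) into overlapping time-frequency
    patches, each a candidate input for a patch-based whistle detector."""
    pf, pt = patch
    sf, st = stride
    F, T = spec.shape
    patches = [
        spec[f:f + pf, t:t + pt]
        for f in range(0, F - pf + 1, sf)      # slide along frequency
        for t in range(0, T - pt + 1, st)      # slide along time
    ]
    return np.stack(patches)

spec = np.random.rand(64, 128)                 # stand-in hydrophone spectrogram
print(extract_patches(spec).shape)             # (21, 32, 32)
```

A network scores each patch for whistle energy, and the per-patch predictions are reassembled into the whistle confidence map that the contour extractor consumes.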

    A supramodal representation of the body surface

    The ability to accurately localize both tactile and painful sensations on the body is one of the most important functions of the somatosensory system. Most accounts of localization refer to the systematic spatial relation between skin receptors and cortical neurons. The topographic organization of somatosensory neurons in the brain provides a map of the sensory surface. However, systematic distortions in perceptual localization tasks suggest that localizing a somatosensory stimulus involves more than simply identifying specific active neural populations within a somatotopic map. Thus, perceptual localization may depend on both afferent inputs and other unknown factors. In four experiments, we investigated whether localization biases vary according to the specific skin regions and subset of afferent fibers stimulated. We represented localization errors as a 'perceptual map' of skin locations. We compared the perceptual maps of stimuli that activate Aβ (innocuous touch), Aδ (pinprick pain), and C fibers (non-painful heat) on both the hairy and glabrous skin of the left hand. Perceptual maps exhibited systematic distortions that strongly depended on the skin region stimulated. We found systematic distal and radial (i.e., towards the thumb) biases in localization of touch, pain, and heat on the hand dorsum. A less consistent proximal bias was found on the palm. These distortions were independent of the population of afferent fibers stimulated, and also independent of the response modality used to report localization. We argue that these biases are likely to have a central origin, and result from a supramodal representation of the body surface.