
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
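
    The reported statistic can be reproduced mechanically: for a two-condition within-subjects design, F(1, n−1) is identical to the square of the paired-samples t statistic. A minimal sketch with hypothetical percent-correct scores for five participants (the abstract does not report the raw data):

```python
import math

def paired_f_one_df(cond_a, cond_b):
    """Within-subjects F with (1, n-1) degrees of freedom for two
    conditions: identical to the squared paired-samples t statistic."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t * t

# Hypothetical percent-correct scores (standard task vs. shifted task)
standard = [80, 74, 85, 78, 81]
shifted = [78, 75, 82, 78, 80]
print(paired_f_one_df(standard, shifted))  # about 2.0 for this made-up data
```

    With four denominator degrees of freedom, as in the reported F(1,4), the critical value at alpha = 0.05 is about 7.7, so an observed F of 2.565 is nonsignificant, consistent with p = 0.185.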

    Magnitude estimation in humans

    Anyone who has climbed a mountain knows that the perceived distance walked depends on more than just its physical length. This intriguing relationship between physical and experienced magnitudes has fascinated researchers across various disciplines for more than 200 years. Part of the enthusiasm is driven by the fact that, although magnitudes, as well as the sensory organs with which we measure them, differ in so many ways, there are unifying principles of behavior common to all types of estimated magnitudes. In this thesis, the general characteristics of human magnitude estimation are studied in the case of visual path integration. The aim is to clarify the role of a-priori knowledge in the estimation of magnitude and to provide a unifying mathematical framework that explains the behavior. In particular, we investigated human linear and angular displacement estimation in different experimental situations with varying experience-dependent and abstract a-priori knowledge. We find systematic behavioral characteristics that are omnipresent in magnitude estimation studies, such as the range effect, the regression effect, and scalar variability. These characteristics are explained by a general model that combines a logarithmic scaling of magnitudes according to the Weber-Fechner law with the concept of Bayesian inference. The model incorporates a-priori knowledge about the stimulus and updates this knowledge on a trial-by-trial basis. The resulting iterative Bayesian estimation accounts for the aforementioned behavioral characteristics and provides a link between the two best-known laws in psychophysics: the Weber-Fechner law and Stevens' power law. This work provides substantial evidence that magnitude estimation is not purely driven by sensation but rests on perceptual estimation processes that exploit and incorporate different types of information sources, in particular short-term prior experience. The proposed mathematical framework is likely applicable to magnitude estimation across different modalities and consequently contributes to a unifying account of the behavior.
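
    The model described above can be sketched compactly. Assuming a Gaussian prior and likelihood on the logarithmic Weber-Fechner scale (the exact formulation is the thesis's own; the weight w and learning rate below are illustrative parameters), the posterior mean is a weighted average of measurement and prior, and exponentiating it yields exactly a Stevens-type power law:

```python
import math

def bayes_log_estimate(stimulus, prior_mean_log, w):
    """Posterior mean on the log (Weber-Fechner) scale.
    Exponentiating gives r = exp((1 - w) * prior) * stimulus**w,
    i.e. Stevens' power law with exponent w, plus the regression
    effect: small magnitudes overestimated, large ones underestimated."""
    m = math.log(stimulus)                     # internal log-scaled measurement
    post = w * m + (1.0 - w) * prior_mean_log  # Gaussian prior x likelihood
    return math.exp(post)

def run_trials(stimuli, w=0.7, lr=0.5):
    """Iterative version: the prior is updated on a trial-by-trial basis."""
    prior_log = math.log(stimuli[0])
    responses = []
    for s in stimuli:
        r = bayes_log_estimate(s, prior_log, w)
        responses.append(r)
        prior_log += lr * (math.log(r) - prior_log)  # move prior toward estimate
    return responses
```

    With a prior centred on 10 and w = 0.7, a stimulus of 100 is estimated at about 10^1.7 ≈ 50 and a stimulus of 1 at about 10^0.3 ≈ 2, reproducing the regression toward the mean of recent experience.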

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Recently, growing interest has appeared in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; to monitor complex land ecosystems for biodiversity conservation; for precision agriculture in the management of soils, crops, and pests; for urban planning; for disaster monitoring; etc. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially-based conservation actions. Moreover, they enable observations of parameters of our environment at broader spatial and finer temporal scales than those allowed through field observation alone. In this sense, recent very high resolution satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, environmental monitoring, etc. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The works selected provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.

    Non-Standard Imaging Techniques

    The first objective of the thesis is to investigate the problem of reconstructing a small-scale object (a few millimeters or smaller) in 3D. In Chapter 3, we show how this problem can be solved effectively by a new multifocus multiview 3D reconstruction procedure which includes a new Fixed-Lens multifocus image capture and a calibrated image registration technique using an analytic homography transformation. The experimental results using real and synthetic images demonstrate the effectiveness of the proposed solutions by showing that both the fixed-lens image capture and multifocus stacking with calibrated image alignment significantly reduce the errors in the camera poses and produce more complete 3D reconstructed models compared with those obtained by conventional moving-lens image capture and multifocus stacking. The second objective of the thesis is modelling the dual-pixel (DP) camera. In Chapter 4, to understand the potential of the DP sensor for computer vision applications, we study the formation of the DP pair, which links the blur and the depth information. A mathematical DP model is proposed which can benefit depth estimation from the blur. These explorations motivate us to propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image. Moreover, we define a reblur loss, which reflects the relationship of the DP image formation process with depth information, to regularize our depth estimate in training. To meet the requirement of a large amount of data for learning, we propose the first DP image simulator, which allows us to create datasets with DP pairs from any existing RGBD dataset. As a side contribution, we collect a real dataset for further research. Extensive experimental evaluation on both synthetic and real datasets shows that our approach achieves competitive performance compared to state-of-the-art approaches.
    The third objective of this thesis is to tackle the multifocus image fusion problem, particularly for long multifocus image sequences. Multifocus image stacking/fusion produces an in-focus image of a scene from a number of partially focused images of that scene in order to extend the depth of field. One limitation of current state-of-the-art multifocus fusion methods is that they do not consider image registration/alignment before fusion. Consequently, fusing unregistered multifocus images produces an in-focus image containing misalignment artefacts. In Chapter 5, we propose image registration by projective transformation before fusion to remove the misalignment artefacts. We also propose a method based on 3D deconvolution to retrieve the in-focus image by formulating the multifocus image fusion problem as a 3D deconvolution problem. The proposed method achieves superior performance compared to state-of-the-art methods. It is also shown that the proposed projective transformation for image registration can improve the quality of the fused images. Moreover, we implement a multifocus simulator to generate synthetic multifocus data from any RGB-D dataset. The fourth objective of this thesis is to explore new ways to detect the polarization state of light. To achieve this objective, in Chapter 6, we investigate a new optical filter, namely an optical rotation filter, for detecting the polarization state with fewer images. The proposed method can estimate the polarization state using two images, one with the filter and another without. The accuracy of estimating the polarization parameters using the proposed method is comparable to that of the existing state-of-the-art method. In addition, the feasibility of detecting the polarization state using only one RGB image captured with the optical rotation filter is also demonstrated by estimating the image without the filter from the image with the filter using a generative adversarial network.
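
    The registration step described for Chapter 5 warps each partially focused frame with a 3x3 projective (homography) matrix before fusion. As a minimal illustration of what such a warp does — the homography estimation itself (e.g. from matched features) is not shown, and the matrix below is a made-up pure translation:

```python
def apply_homography(H, points):
    """Map 2D points through a 3x3 projective transformation H:
    (x, y) -> ((h00*x + h01*y + h02) / w, (h10*x + h11*y + h12) / w),
    where w = h20*x + h21*y + h22."""
    out = []
    for x, y in points:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        wh = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / wh, yh / wh))  # perspective divide
    return out

# A pure translation by (2, 3) expressed as a homography
H = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 1]]
print(apply_homography(H, [(0, 0), (1, 1)]))  # [(2.0, 3.0), (3.0, 4.0)]
```

    A general homography additionally models rotation, scale, and perspective through its bottom row; fusing frames only after such alignment is what removes the misalignment artefacts the abstract describes.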

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer’s door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.

    Safe and Sound: Proceedings of the 27th Annual International Conference on Auditory Display

    Complete proceedings of the 27th International Conference on Auditory Display (ICAD2022), June 24–27, held as an online virtual conference.