
    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    Local wavelet features for statistical object classification and localisation

    This article presents a system for texture-based probabilistic classification and localisation of 3D objects in 2D digital images and discusses selected applications. The objects are described by local feature vectors computed using the wavelet transform. In the training phase, object features are statistically modelled as normal density functions. In the recognition phase, a maximisation algorithm compares the learned density functions with the feature vectors extracted from a real scene and yields the classes and poses of objects found in it. Experiments carried out on a real dataset of over 40000 images demonstrate the robustness of the system in terms of classification and localisation accuracy. Finally, two important application scenarios are discussed, namely classification of museum artefacts and classification of metallography images.
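The two phases described above (normal-density modelling of feature vectors in training, then density maximisation over classes in recognition) can be illustrated with a minimal sketch. The class names, feature dimensionality and synthetic data below are invented for illustration and do not come from the article:

```python
import numpy as np

def fit_gaussian(features):
    """Training phase: model a class's feature vectors as a normal
    density (mean vector and covariance matrix)."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, cov

def log_density(x, mu, cov):
    """Log of the multivariate normal density at x."""
    d = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + len(x) * np.log(2 * np.pi))

def classify(x, models):
    """Recognition phase: return the class whose learned density
    assigns the feature vector x the highest (log) likelihood."""
    return max(models, key=lambda c: log_density(x, *models[c]))

# Synthetic stand-in data for two hypothetical classes.
rng = np.random.default_rng(0)
models = {
    "artefact": fit_gaussian(rng.normal(0.0, 1.0, size=(200, 3))),
    "metallography": fit_gaussian(rng.normal(3.0, 1.0, size=(200, 3))),
}
print(classify(np.array([2.9, 3.1, 3.0]), models))  # → metallography
```

In the article the feature vectors come from a wavelet transform of local image patches; here random vectors stand in for them, since only the statistical modelling step is being sketched.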

    Redefining A in RGBA: Towards a Standard for Graphical 3D Printing

    Advances in multimaterial 3D printing have the potential to reproduce various visual appearance attributes of an object in addition to its shape. Since many existing 3D file formats encode color and translucency by RGBA textures mapped to 3D shapes, RGBA information is particularly important for practical applications. In contrast to color (encoded by RGB), which is specified by the object's reflectance, selected viewing conditions and a standard observer, translucency (encoded by A) is not linked to any measurable physical or perceptual quantity. Thus, reproducing translucency encoded by A is open to interpretation. In this paper, we propose a rigorous definition for A suitable for use in graphical 3D printing, which is independent of the 3D printing hardware and software, and which links both optical material properties and perceptual uniformity for human observers. By deriving our definition from the absorption and scattering coefficients of virtual homogeneous reference materials with an isotropic phase function, we achieve two important properties. First, a simple adjustment of A is possible, which preserves the translucency appearance if an object is re-scaled for printing. Second, the value of A for a real (potentially non-homogeneous) material can be determined by minimizing a distance function between light transport measurements of this material and simulated measurements of the reference materials. Such measurements can be conducted by commercial spectrophotometers used in graphic arts. Finally, we conduct visual experiments employing the method of constant stimuli, and derive from them an embedding of A into a nearly perceptually uniform scale of translucency for the reference materials.
    Comment: 20 pages (incl. appendices), 20 figures. Version with higher quality images: https://cloud-ext.igd.fraunhofer.de/s/pAMH67XjstaNcrF (main article) and https://cloud-ext.igd.fraunhofer.de/s/4rR5bH3FMfNsS5q (appendix). Supplemental material including code: https://cloud-ext.igd.fraunhofer.de/s/9BrZaj5Uh5d0cOU/downloa
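The estimation step described in the abstract, determining A for a real material by minimising a distance between its light-transport measurements and simulated measurements of the reference materials, can be sketched as follows. The `simulate()` model (a simple Beer-Lambert-style attenuation) and the mapping from A to an extinction coefficient are invented stand-ins for illustration, not the paper's actual formulation:

```python
import numpy as np

def simulate(alpha, thicknesses):
    """Simulated transmittance of a homogeneous reference material.
    Assumed (illustrative) mapping: extinction grows as A decreases."""
    extinction = 5.0 * (1.0 - alpha)
    return np.exp(-extinction * thicknesses)  # exponential attenuation

def estimate_alpha(measured, thicknesses, grid=None):
    """Grid search for the A whose simulated measurements are closest
    (in least squares) to the measured ones."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 1001)
    errors = [np.sum((simulate(a, thicknesses) - measured) ** 2)
              for a in grid]
    return grid[int(np.argmin(errors))]

# Synthetic "measurement" of a material whose true A is 0.7.
thicknesses = np.array([1.0, 2.0, 4.0])
measured = simulate(0.7, thicknesses)
print(estimate_alpha(measured, thicknesses))  # close to 0.7
```

A grid search is used here purely for transparency; any standard scalar minimiser would serve the same role in the distance-minimisation step.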

    Evaluation of changes in image appearance with changes in displayed image size

    This research focused on the quantification of changes in image appearance when images are displayed at different image sizes on LCD devices. The final results were provided as calibrated Just Noticeable Differences (JNDs) on relevant perceptual scales, allowing the prediction of sharpness and contrast appearance with changes in the displayed image size. A series of psychophysical experiments was conducted to enable appearance predictions. Firstly, a rank order experiment was carried out to identify the image attributes most affected by changes in displayed image size. Two digital cameras, exhibiting very different reproduction qualities, were employed to capture the same scenes, for the investigation of the effect of the original image quality on image appearance changes. A wide range of scenes with different scene properties was used as a test set for the investigation of image appearance changes with scene type. The outcomes indicated that sharpness and contrast were the most important attributes for the majority of scene types and original image qualities. Appearance matching experiments were further conducted to quantify changes in perceived sharpness and contrast with respect to changes in the displayed image size. For the creation of sharpness matching stimuli, a set of frequency domain filters was designed to provide equal intervals in image quality, by taking into account the system’s Spatial Frequency Response (SFR) and the observation distance. For the creation of contrast matching stimuli, a series of spatial domain S-shaped filters was designed to provide equal intervals in image contrast, by gamma adjustments. Five displayed image sizes were investigated. Observers were always asked to match the appearance of the smaller version of each stimulus to its larger reference. Lastly, rating experiments were conducted to validate the derived JNDs in perceptual quality for both sharpness and contrast stimuli.
    Data obtained from these experiments were finally converted into JND scales for each individual image attribute. Linear functions were fitted to the final data, allowing the prediction of the appearance of images viewed at larger sizes than those investigated in this research.
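The contrast-stimulus generation described above (spatial-domain S-shaped filtering via gamma adjustments) can be illustrated with a minimal sketch. The specific curve family below, a symmetric gamma curve pivoted at mid-grey, is an assumption for illustration and not the thesis's actual filter design:

```python
import numpy as np

def s_curve(image, gamma):
    """Apply an S-shaped tone curve to an image normalised to [0, 1].
    gamma > 1 increases contrast, gamma < 1 reduces it; mid-grey (0.5)
    is left unchanged, so for images centred on mid-grey the mean
    luminance is approximately preserved."""
    out = np.empty_like(image)
    lo = image < 0.5
    # Darken shadows with a gamma curve on the lower half...
    out[lo] = 0.5 * (2.0 * image[lo]) ** gamma
    # ...and lighten highlights with the mirrored curve on the upper half.
    out[~lo] = 1.0 - 0.5 * (2.0 * (1.0 - image[~lo])) ** gamma
    return out
```

A family of such curves with evenly spaced gamma values yields a series of stimuli at stepped contrast levels, which is the role the thesis's filters play in the matching experiments.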

    Portable LCD Image Quality: Effects of Surround Luminance


    Defining Reality in Virtual Reality: Exploring Visual Appearance and Spatial Experience Focusing on Colour

    Today, different actors in the design process have communication difficulties in visualizing and predicting how the not yet built environment will be experienced. Visually believable virtual environments (VEs) can make it easier for architects, users and clients to participate in the planning process. This thesis deals with the difficulties of translating reality into digital counterparts, focusing on visual appearance (particularly colour) and spatial experience. The goal is to develop knowledge of how different aspects of a VE, especially light and colour, affect the spatial experience; and thus to contribute to a better understanding of the prerequisites for visualizing believable spatial VR-models. The main aims are to 1) identify problems and test solutions for simulating realistic spatial colour and light in VR; and 2) develop knowledge of the spatial conditions in VR required to convey believable experiences, and evaluate different ways of visualizing spatial experiences. The studies are conducted from an architectural perspective; i.e. the whole of the spatial settings is considered, which is a complex task. One important contribution therefore concerns the methodology. Different approaches were used: 1) a literature review of relevant research areas; 2) a comparison between existing studies on colour appearance in 2D vs 3D; 3) a comparison between a real room and different VR-simulations; 4) elaborations with an algorithm for colour correction; 5) reflections in action on a demonstrator for correct appearance and experience; and 6) an evaluation of texture-styles with non-photorealistic expressions. The results showed various problems related to the translation and comparison of reality to VR. The studies pointed out the significance of inter-reflections, colour variations, perceived colour of light and shadowing for the visual appearance in real rooms.
    Some differences in VR were connected to arbitrary parameter settings in the software; heavily simplified chromatic information on illumination; and incorrect inter-reflections. The models were experienced differently depending on the application. Various spatial differences between reality and VR could be solved by visual compensation. The study with texture-styles pointed out the significance of varying visual expressions in VR-models.

    Just noticeable differences in perceived image contrast with changes in displayed image size

    An evaluation of the change in perceived image contrast with changes in displayed image size was carried out. This was achieved using data from four psychophysical investigations, which employed techniques to match the perceived contrast of displayed images of five different sizes. A total of twenty-four S-shape polynomial functions were created and applied to every original test image to produce images with different contrast levels. The objective contrast related to each function was evaluated from the gradient of the mid-section of the curve (gamma). The manipulation technique took into account published gamma differences that produced a just-noticeable-difference (JND) in perceived contrast. The filters were designed to achieve approximately half a JND, whilst keeping the mean image luminance unaltered. The processed images were then used as test series in a contrast matching experiment. Sixty-four natural scenes, with varying scene content acquired under various illumination conditions, were selected from a larger set captured for the purpose. Results showed that the degree of change in contrast between images of different sizes varied with scene content but was not as important as equivalent perceived changes in sharpness.
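The objective contrast measure described above, the gradient of the tone curve's mid-section (gamma), can be sketched numerically. The symmetric S-curve family used here is an illustrative stand-in, not one of the paper's twenty-four polynomial functions:

```python
import numpy as np

def s_curve(x, gamma):
    """Illustrative symmetric S-shaped tone curve on [0, 1], pivoted
    at mid-grey; gamma = 1 is the identity mapping."""
    return np.where(x < 0.5,
                    0.5 * (2.0 * x) ** gamma,
                    1.0 - 0.5 * (2.0 * (1.0 - x)) ** gamma)

def mid_section_gradient(gamma, half_width=0.05):
    """Numerical slope of the tone curve around mid-grey (0.5); for
    this curve family it approximates gamma itself."""
    x0, x1 = 0.5 - half_width, 0.5 + half_width
    return (s_curve(x1, gamma) - s_curve(x0, gamma)) / (x1 - x0)
```

For gamma = 1 the curve is the identity and the mid-section slope is 1; larger gamma values steepen the mid-section, which is what makes the slope usable as an objective contrast value for each function in the family.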