Gaze-Contingent Computer Graphics (Blickpunktabhängige Computergraphik)
Contemporary digital displays feature multi-million pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in space and time. The discrepancy between viewing the physical world and its sampled depiction on digital displays gives rise to perceptual quality degradations. By measuring or estimating where we look, gaze-contingent algorithms aim at exploiting the way we visually perceive to remedy visible artifacts. This dissertation presents a variety of novel gaze-contingent algorithms and respective perceptual studies. Chapters 4 and 5 present methods to boost the perceived visual quality of conventional video footage when viewed on commodity monitors or projectors. In Chapter 6, a novel head-mounted display with real-time gaze tracking is described. The device enables a large variety of applications in the context of Virtual Reality and Augmented Reality. Using the gaze-tracking VR headset, a novel gaze-contingent rendering method is described in Chapter 7. The gaze-aware approach greatly reduces the computational effort of shading virtual worlds. The described methods and studies show that gaze-contingent algorithms are able to improve the quality of displayed images and videos, or to reduce the computational effort of image generation, while the display quality perceived by the user does not change.

Modern digital displays offer ever higher resolutions at likewise increasing refresh rates. Reality, by contrast, is continuous in space and time. This fundamental difference leads to perceptual discrepancies for the viewer. Tracking the direction of gaze enables gaze-contingent display methods that can prevent visible artifacts. This dissertation contributes to four areas of gaze-contingent and perceptually accurate display methods.
The methods in Chapters 4 and 5 aim to increase the perceived visual quality of videos for the viewer, where the videos are shown on commodity output hardware such as a television or projector. Chapter 6 describes the development of a novel head-mounted display with support for real-time gaze tracking. This combination of features enables a range of interesting applications in Virtual Reality (VR) and Augmented Reality (AR). The fourth and final contribution, in Chapter 7, describes a new algorithm that uses the developed eye-tracking head-mounted display for gaze-contingent rendering. The quality of the shading is analyzed and adjusted in real time for every pixel of the image on the basis of a perceptual model. The method has the potential to reduce the computational cost of shading a virtual scene to a fraction. The methods and studies described in this dissertation show that gaze-contingent algorithms can effectively improve the display quality of images and videos, or that, at constant perceived image quality, the computational cost of image generation can be reduced considerably.
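The per-pixel, perceptual-model-driven shading described above can be illustrated with a minimal sketch. Assuming a simple linear model of visual acuity versus eccentricity (the constants below are illustrative placeholders, not the dissertation's calibrated model), a gaze-contingent renderer can let one shading sample span more pixels as eccentricity from the tracked gaze point grows:

```python
def acuity_limit(ecc_deg, mar0=1.0 / 60.0, slope=0.022):
    """Minimum resolvable angle (degrees) at a given eccentricity.
    Linear model; mar0 (~1 arcmin foveal limit) and slope are
    illustrative values, not calibrated constants."""
    return mar0 + slope * ecc_deg

def shading_rate(pixel_size_deg, ecc_deg):
    """How many pixels a single shading sample may span before the
    coarsening becomes resolvable at this eccentricity."""
    return max(1, int(acuity_limit(ecc_deg) / pixel_size_deg))

# 0.02 deg/pixel display, gaze at the screen centre:
# full-rate shading at the fovea, much coarser in the periphery.
rates = {ecc: shading_rate(0.02, ecc) for ecc in (0, 10, 30)}
```

With these toy numbers, shading cost per pixel drops by more than an order of magnitude a few tens of degrees away from the gaze point, which is the effect the dissertation exploits.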
Visual Adaptations and Behavioural Strategies to Detect and Catch Small Targets
Predatory behaviours are ideal for studying the limits of performance and control in animals. Predation naturally creates a competition between the sensors and physiology of predator and prey. Aerial predation demonstrates the greatest feats of physical performance, demanding the highest speeds and accelerations whilst both predator and prey are free to pitch, yaw, and roll. These high speeds and degrees of rotational freedom make control a complex problem. For the researcher attempting to decipher the control laws that underpin predator guidance, however, the question is made more tractable by the predator's fixation on its target. The goal of the pursuer is clear, to contact the target, and its systems are therefore focused on optimizing that action. This contrasts with more mundane activities, where conflicting interests compete for the animal's attention and behavioural response. To study the necessary trade-offs that underpin aerial predation, this thesis focuses on the hunting behaviour of two fly species. The first is a robber fly, Holcocephala fusca, on which the majority of the first two chapters focus. The second, the killer fly Coenosia attenuata, features in the latter two chapters as a direct contrast to the results from Holcocephala. Both are miniature dipteran predators, but they are not closely related. The thesis is organised into six chapters, summarised in the following list:
1. The compound eye of insects generally has much poorer resolution than that of camera-type eyes. Poor resolution is exacerbated in smaller insects, which cannot commit the resources required for eyes with the large lenses that facilitate high spatial resolution. Holcocephala has developed a small number of facets into a forward-facing acute zone where spatial acuity reaches ~0.28°, rivalling the very best resolution of any compound eye. The only compound eyes with comparable spatial resolution belong to dragonflies, which are more than an order of magnitude larger than Holcocephala.
2. Numerous potential targets may be airborne within the visual range of a predator. Not all of these may be suitable. Chasing unsuitable targets may waste energy or result in direct harm should they turn out to be larger than the predator can overcome. It is thus a strong imperative for a predator to filter the targets it pursues. Targets silhouetted against the sky offer few cues that a predator could use to determine their size. Holcocephala nevertheless displays acute size selectivity towards smaller targets. This selectivity goes beyond heuristic rules and size/speed ratios; instead, Holcocephala appears able to determine the absolute size and distance of targets.
3. Both Holcocephala and Coenosia intercept targets, heading for where the target is going to be in the future rather than its current location. Both species plot trajectories in keeping with the guidance law of proportional navigation, an algorithm derived for modern guided missiles. There are key differences evident in the internal physiological constants applied to the control system between the species. These differences are likely linked to the specific environmental conditions and visual physiologies of the flies, especially the range at which targets are attacked.
4. Stemming from the use of the proportional navigational framework, this chapter dives into the intricacies of gain and the weighting of the navigational constant, and the geometric factors that underpin the control effort and eventual success of the control system.
5. "Falcon-diving" occurs when killer flies drop from their enclosure ceiling and miss the targets they dive towards. Through proportional navigation, it can be demonstrated that the navigational system, combined with excessive speed, produces acceleration demands that the body cannot match.
6. Holcocephala is capable of evading static obstacles whilst intercepting targets. Applying proportional navigation together with a secondary obstacle-evasion controller demonstrates where the fly combines multiple inputs to guide its heading.

This work was funded by the United States Air Force Office of Scientific Research.
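The guidance law named in point 3, proportional navigation, commands turning in proportion to the rotation rate of the line of sight from pursuer to target. A minimal 2-D sketch of the turn-rate form (all constants illustrative, unrelated to the fitted fly data) shows a faster pursuer intercepting a constant-velocity target:

```python
import math

def simulate_pn(N=3.0, dt=0.001, t_max=5.0):
    """Toy 2-D pursuit under turn-rate proportional navigation.
    Returns (time_of_intercept, position), or (None, position) if the
    pursuer never closes to within 5 cm. All numbers are illustrative."""
    px, py, speed, heading = 0.0, 0.0, 3.0, 0.0   # pursuer state
    tx, ty, tvx, tvy = 4.0, 3.0, -1.0, 0.0        # constant-velocity target
    los_prev = math.atan2(ty - py, tx - px)
    t = 0.0
    while t < t_max:
        los = math.atan2(ty - py, tx - px)
        los_rate = (los - los_prev) / dt
        los_prev = los
        heading += N * los_rate * dt              # turn rate proportional to LOS rate
        px += speed * math.cos(heading) * dt
        py += speed * math.sin(heading) * dt
        tx += tvx * dt
        ty += tvy * dt
        if math.hypot(tx - px, ty - py) < 0.05:   # intercept threshold
            return t, (px, py)
        t += dt
    return None, (px, py)

t_hit, _ = simulate_pn()
```

Missile applications usually command lateral acceleration (N times closing speed times LOS rate) rather than turn rate; the thesis reports that the fitted navigational constants differ between the two fly species.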
Aerial Vehicles
This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.
Towards Smarter Fluorescence Microscopy: Enabling Adaptive Acquisition Strategies With Optimized Photon Budget
Fluorescence microscopy is an invaluable technique for studying the intricate process of organism development. The acquisition process, however, involves a fundamental trade-off between the quality and the reliability of the acquired data. On the one hand, the goal of capturing development in its entirety, often across multiple spatial and temporal scales, requires extended acquisition periods. On the other hand, the high doses of light required for such experiments are harmful to living samples and can introduce non-physiological artifacts into the normal course of development. Conventionally, a single set of acquisition parameters is chosen at the beginning of the acquisition and constitutes the experimenter's best guess at the overall optimal configuration within this trade-off. In the paradigm of adaptive microscopy, in contrast, one aims to distribute the photon budget more efficiently by dynamically adjusting the acquisition parameters to the changing properties of the sample. In this thesis, I explore the principles of adaptive microscopy and propose a range of improvements for two real imaging scenarios.
Chapter 2 summarizes the design and implementation of an adaptive pipeline for efficient observation of the asymmetrically dividing neurogenic progenitors in Zebrafish retina. In the described approach the fast and expensive acquisition mode is automatically activated only when the mitotic cells are present in the field of view. The method illustrates the benefits of the adaptive acquisition in the common scenario of the individual events of interest being sparsely distributed throughout the duration of the acquisition.
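The event-triggered scheme described for Chapter 2 reduces, at its core, to an acquisition loop that images cheaply by default and escalates only while events of interest are present. The following sketch is a toy illustration; the detector, acquisition modes, and event times are invented placeholders, not the published pipeline:

```python
def adaptive_acquisition(timepoints, acquire_cheap, acquire_expensive, detect_event):
    """Image each timepoint in the cheap low-dose mode by default;
    switch to the expensive mode for the next timepoint whenever an
    event of interest was just detected."""
    log = []
    use_expensive = False
    for t in timepoints:
        acquire = acquire_expensive if use_expensive else acquire_cheap
        data = acquire(t)
        use_expensive = detect_event(data)   # choose mode for the next timepoint
        log.append((t, acquire.__name__))
    return log

EVENTS = {3, 4, 5}                  # toy ground truth: mitosis at t = 3..5

def acquire_cheap(t):               # stand-in: low light dose, coarse sampling
    return {"t": t, "mitotic": t in EVENTS}

def acquire_expensive(t):           # stand-in: full resolution, high dose
    return {"t": t, "mitotic": t in EVENTS}

def detect_event(data):             # stand-in for an online mitosis classifier
    return data["mitotic"]

log = adaptive_acquisition(range(8), acquire_cheap, acquire_expensive, detect_event)
```

Note the one-timepoint latency between detection and mode switch, a practical property of any closed-loop scheme of this kind: the expensive mode engages on the frame after the event is first seen.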
Chapter 3 focuses on computational aspects of segmentation-based adaptive schemes for efficient acquisition of the developing Drosophila pupal wing. Fast sample segmentation is shown to provide a valuable output for the accurate evaluation of the sample morphology and dynamics in real time. This knowledge proves instrumental for adjusting the acquisition parameters to the current properties of the sample and reducing the required photon budget with minimal effects to the quality of the acquired data.
Chapter 4 addresses the generation of synthetic training data for learning-based methods in bioimage analysis, making them more practical and accessible for smart microscopy pipelines. State-of-the-art deep learning models trained exclusively on the generated synthetic data are shown to yield powerful predictions when applied to real microscopy images. Finally, an in-depth evaluation of the segmentation quality of models trained on real and on synthetic data illustrates the important practical aspects of the approach and outlines directions for further research.
X-ray computed tomography
X-ray computed tomography (CT) can reveal the internal details of objects in three dimensions non-destructively. In this Primer, we outline the basic principles of CT and describe the ways in which a CT scan can be acquired using X-ray tubes and synchrotron sources, including the different possible contrast modes that can be exploited. We explain the process of computationally reconstructing three-dimensional (3D) images from 2D radiographs and how to segment the 3D images for subsequent visualization and quantification. Whereas CT is widely used in medical and heavy industrial contexts at relatively low resolutions, here we focus on the application of higher-resolution X-ray CT across science and engineering. We consider the application of X-ray CT to study subjects across the materials, metrology and manufacturing, engineering, food, biological, geological and palaeontological sciences. We examine how CT can be used to follow the structural evolution of materials in three dimensions in real time or in a time-lapse manner, for example to follow materials manufacturing or the in-service behaviour and degradation of manufactured components. Finally, we consider the potential for radiation damage and common sources of imaging artefacts, discuss reproducibility issues and consider future advances and opportunities.
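The computational reconstruction step outlined above is classically performed by filtered backprojection: each projection is sharpened with a ramp filter, then smeared back across the image along its acquisition angle. The following is a minimal 2-D parallel-beam sketch with NumPy (nearest-neighbour rotations, a toy phantom, no detector model), illustrative only and far from production CT code:

```python
import numpy as np

def rotate_nn(img, theta):
    """Nearest-neighbour rotation about the image centre (demo quality only)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    x, y = xs - c, ys - c
    xr = np.cos(theta) * x + np.sin(theta) * y
    yr = -np.sin(theta) * x + np.cos(theta) * y
    xi = np.clip(np.round(xr + c).astype(int), 0, n - 1)
    yi = np.clip(np.round(yr + c).astype(int), 0, n - 1)
    return img[yi, xi]

def fbp(sinogram, angles):
    """Ramp-filter each projection, then backproject it across the image."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                       # |frequency| filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    for proj, th in zip(filtered, angles):
        recon += rotate_nn(np.tile(proj, (n, 1)), th)      # smear along the ray
    return recon * np.pi / (2 * len(angles))

# Toy phantom: a single bright square, off-centre
n = 64
phantom = np.zeros((n, n))
phantom[20:28, 36:44] = 1.0
angles = np.linspace(0, np.pi, 90, endpoint=False)
# Forward projection: rotate so rays align with columns, sum each column
sino = np.array([rotate_nn(phantom, -th).sum(axis=0) for th in angles])
recon = fbp(sino, angles)
```

Without the ramp filter, plain backprojection blurs the object with a 1/r halo; the filter is what makes the square re-emerge sharply, which is why every practical CT reconstruction pipeline includes it (or an iterative equivalent).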
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1°) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
Modeling the Human Visuo-Motor System for Remote-Control Operation
University of Minnesota Ph.D. dissertation, 2018. Major: Computer Science. Advisors: Nikolaos Papanikolopoulos, Berenice Mettler. 1 computer file (PDF); 172 pages.

Successful operation of a teleoperated miniature rotorcraft relies on capabilities including guidance, trajectory following, feedback control, and environmental perception. For many operating scenarios, fragile automation systems are unable to provide adequate performance. In contrast, human-in-the-loop systems demonstrate an ability to adapt to changing and complex environments, stability in control response, high-level goal selection and planning, and the ability to perceive and process large amounts of information. Modeling the perceptual processes of the human operator provides the foundation necessary for a systems-based approach to the design of control and display systems used by remotely operated vehicles. In this work we consider flight tasks for remotely controlled miniature rotorcraft operating in indoor environments. Operation of agile robotic systems in three-dimensional spaces requires a detailed understanding of the perceptual aspects of the problem as well as knowledge of the task and models of the operator response. When modeling the human-in-the-loop, the dynamics of the vehicle, environment, and human perception-action are tightly coupled in space and time. The dynamic response of the overall system emerges from the interplay of perception and action. The main questions to be answered in this work are: i) what approach does the human operator implement when generating a control and guidance response? ii) how is information about the vehicle and environment extracted by the human? iii) can the gaze patterns of the pilot be decoded to provide information for estimation and control? In relation to existing research, this work differs by focusing on fast-acting dynamic systems in multiple dimensions and investigating how the gaze can be exploited to provide action-relevant information.
To study human-in-the-loop systems, the development and integration of the experimental infrastructure are described. Utilizing this infrastructure, a theoretical framework for computational modeling of the human pilot's perception-action is proposed and verified experimentally. The benefits of the human visuo-motor model are demonstrated through application examples in which the perceptual and control functions of a teleoperation system are augmented to reduce workload and provide a more natural human-machine interface.