
    Adaptive Nonlocal Filtering: A Fast Alternative to Anisotropic Diffusion for Image Enhancement

    Full text link
    The goal of many early visual filtering processes is to remove noise while at the same time sharpening contrast. A historical succession of approaches to this problem, starting with the use of simple derivative and smoothing operators, and the subsequent realization of the relationship between scale-space and the isotropic diffusion equation, has recently resulted in the development of "geometry-driven" diffusion. Nonlinear and anisotropic diffusion methods, as well as image-driven nonlinear filtering, have provided improved performance relative to the older isotropic and linear diffusion techniques. These techniques, which either explicitly or implicitly make use of kernels whose shape and center are functions of local image structure, are too computationally expensive for use in real-time vision applications. In this paper, we show that results largely equivalent to those obtained from geometry-driven diffusion can be achieved by a process that is conceptually separated into two very different functions. The first involves the construction of a vector field of "offsets", defined on a subset of the original image, at which to apply a filter. The offsets displace filters away from boundaries to prevent edge blurring and destruction. The second is the (straightforward) application of the filter itself. The former function is a kind of generalized image skeletonization; the latter is conventional image filtering. This formulation leads to results that are qualitatively similar to contemporary nonlinear diffusion methods, but at computation times roughly two orders of magnitude faster, allowing applications of this technique to real-time imaging. An additional advantage of this formulation is that it allows existing filter hardware and software implementations to be applied without modification, since the offset step reduces to an image pixel permutation, or look-up table operation, after application of the filter.
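The two-stage idea described here, an offset field computed from local image structure followed by a conventional filter applied through a pixel permutation, can be sketched in a few lines of NumPy. This is a minimal illustration of the concept, not the authors' implementation: the gradient-magnitude edge indicator, the window radius, and the 3x3 box filter are all assumptions made for the sketch.

```python
import numpy as np

def offset_filter(img, radius=1):
    # 1) Edge indicator: gradient magnitude from central differences.
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)

    h, w = img.shape
    # 2) Offset field (the "generalized skeletonization" step): for each
    #    pixel, the displacement to the smoothest pixel in a small window,
    #    which moves the filter centre away from boundaries.
    off = np.zeros((h, w, 2), dtype=int)
    best = np.full((h, w), np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = np.roll(np.roll(grad, -dy, axis=0), -dx, axis=1)
            better = cand < best
            best[better] = cand[better]
            off[better] = (dy, dx)

    # 3) Conventional filtering first (here a wrapping 3x3 box filter)...
    box = sum(np.roll(np.roll(img.astype(float), dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    # 4) ...then the offset step, which reduces to a pixel permutation
    #    (a pure look-up operation) after the filter has been applied.
    ys, xs = np.mgrid[0:h, 0:w]
    return box[(ys + off[..., 0]) % h, (xs + off[..., 1]) % w]
```

On a step edge, each pixel near the boundary reads the filter result from the smooth side, so the edge survives smoothing that a plain box filter would destroy; the filtering stage itself is unchanged, which is why existing filter implementations can be reused.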

    Scale Stain: Multi-Resolution Feature Enhancement in Pathology Visualization

    Full text link
    Digital whole-slide images of pathological tissue samples have recently become feasible for use within routine diagnostic practice. These gigapixel-sized images enable pathologists to perform reviews using computer workstations instead of microscopes. Existing workstations visualize scanned images by providing a zoomable image space that reproduces the capabilities of the microscope. This paper presents a novel visualization approach that enables filtering of the scale-space according to color preference. The visualization method reveals diagnostically important patterns that are otherwise not visible. The paper demonstrates how this approach has been implemented into a fully functional prototype that lets the user navigate the visualization parameter space in real time. The prototype was evaluated for two common clinical tasks with eight pathologists in a within-subjects study. The data reveal that task efficiency increased by 15% using the prototype, with maintained accuracy. By analyzing behavioral strategies, it was possible to conclude that the efficiency gain was caused by a reduction of the panning needed to perform systematic search of the images. The prototype system was well received by the pathologists, who did not detect any risks that would hinder use in clinical routine.

    Foot Detection Method for Footwear Augmented Reality Applications

    Get PDF
    Augmented reality is gaining popularity as a technique for visualizing apparel usage. Ideally it allows users to virtually try out different clothes, shoes, and accessories, with only a camera and a suitable application which encompasses different apparel choices.

    Focusing on augmented reality for footwear, there is a multitude of solutions for offering the reality augmentation experience to end users. These solutions employ different methods to deliver the end result, such as using a fixed camera and a constant background, or requiring markers on the feet for detection. Among the variety of techniques used to approach footwear reality augmentation, there is no single best, simplest, or fastest solution, and the solutions' sources are usually not publicly available.

    This thesis tries to come up with a solution to the footwear reality augmentation problem that can be used as a base for any subsequent footwear augmented reality project. This intentionally universal approach is created by researching possible combinations of methods that can yield a solution for footwear reality augmentation.

    In general, the idea behind this thesis is to conduct a literature review of different techniques and arrive at a robust algorithm, or combination of methods, that can be used for footwear augmented reality.

    A researched, documented, implemented, and published solution would allow any upcoming footwear augmented reality project to start from an established base, reducing the time wasted on already-solved issues and possibly improving the quality of the end result.

    The solution presented in this thesis is developed with a focus on augmented reality applications. The method is neither specific to any platform nor does it have heavy location requirements. The result is a foot detection algorithm capable of working on commonly available hardware, which is beneficial for augmented reality applications.

    Surface Shape Perception in Volumetric Stereo Displays

    Get PDF
    In complex volume visualization applications, understanding the displayed objects and their spatial relationships is challenging for several reasons. One of the most important obstacles is that these objects can be translucent and can overlap spatially, making it difficult to understand their spatial structures. However, in many applications, for example medical visualization, it is crucial to have an accurate understanding of the spatial relationships among objects. The addition of visual cues has the potential to help human perception in these visualization tasks. Descriptive line elements, in particular, have been found to be effective in conveying shape information in surface-based graphics, as they sparsely cover a geometrical surface while consistently following its geometry. We present two approaches to apply such line elements to a volume rendering process and to verify their effectiveness in volume-based graphics. This thesis reviews our progress to date in this area and discusses its effects and limitations. Specifically, it examines the volume renderer implementation that formed the foundation of this research, the design of the pilot study conducted to investigate the effectiveness of this technique, and the results obtained. It further discusses improvements designed to address the issues revealed by the statistical analysis. The improved approach is able to handle visualization targets with general shapes, making it more applicable to real visualization applications involving complex objects.

    A Three-Dimensional Heads-Up Primary Navigation Reference Display for Paratroopers Performing High Altitude High Open Jumps

    Get PDF
    The Department of Defense (DoD) relies on the para-dropping of resources to meet different objectives in order to accomplish missions during peace-time, war-time, or military operations other than war. The resources dropped to the ground via parachute range from supplies and equipment to the most valued asset, people. Tactics have been developed to increase the safety of troops parachuting into areas of conflict. These tactics include high-altitude high-opening (HAHO) jumping and night jumping. HAHO jumping allows paratroopers to travel large distances in the air away from the path of the delivering aircraft. Night jumping, done with the aid of night vision goggles (NVGs), provides paratroopers with the cover of night. Both of these tactics aid in avoiding detection. These techniques, however, have their drawback: low cloud cover and fog can often delay mission accomplishment due to a lack of visibility. However, low cloud cover and foggy conditions also provide a tremendous aid in covert insertion missions and enhance the element of surprise. This research introduces a novel application combining three-dimensional graphics and GPS into a primary navigation reference for paratroopers. It uses three-dimensional graphics to realistically portray a paratrooper's movement in the physical world, measured by GPS, as movement in a computer-generated scene. This reference, presented as a heads-up display on the NVGs paratroopers already wear, facilitates mission accomplishment in cloudy and foggy conditions. Evaluation of a prototype system validates the effectiveness of such a three-dimensional navigation reference for paratroopers.

    Motion blur in digital images - analysis, detection and correction of motion blur in photogrammetry

    Get PDF
    Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution, due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence, or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors, and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently done manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This thesis demonstrates the negative effect that blurred images have on photogrammetric processing. It shows that small amounts of blur have serious impacts on target detection and that blur slows down processing due to the requirement for human intervention. Larger blur can make an image completely unusable, so such images need to be excluded from processing. To exclude such images from large image datasets, an algorithm was developed. The newly developed method makes it possible to detect blur caused by linear camera displacement. The method is based on human detection of blur: humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not.
    The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), on its own does not provide an absolute number for judging whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values of the same dataset. This algorithm enables the exclusion of blurred images and subsequently allows photogrammetric processing without them. However, it is also possible to use deblurring techniques to restore blurred images. Deblurring of images is a widely researched topic and is often based on the Wiener or Richardson-Lucy deconvolution, both of which require precise knowledge of the blur path and extent. Even with knowledge of the blur kernel, the correction causes errors such as ringing, and the deblurred image appears muddy and not completely sharp. In the study reported here, overlapping images are used to support the deblurring process, and an algorithm based on the Fourier transformation is presented. This works well in flat areas, but the need for geometrically correct sharp images for deblurring may limit the application. Another method to enhance the image is the unsharp mask method, which improves images significantly and makes photogrammetric processing more successful. However, deblurring of images needs to focus on geometrically correct deblurring to assure geometrically correct measurements. Furthermore, a novel edge-shifting approach was developed, which aims to perform geometrically correct deblurring. The idea of edge shifting appears promising but requires more advanced programming.
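The core SIEDS idea, creating an internal comparison image and measuring how much edge energy the input loses relative to it, can be sketched as follows. This is a hypothetical reading of the abstract, not the thesis implementation: it works on a single grayscale channel rather than a saturation image, and the box-blur comparison image and window size are assumptions.

```python
import numpy as np

def edge_magnitude(img):
    # Gradient-magnitude edge map from central differences.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def box_blur(img, r=2):
    # Separable wrapping box blur used to build the comparison image.
    out = img.astype(float)
    for axis in (0, 1):
        out = sum(np.roll(out, s, axis) for s in range(-r, r + 1)) / (2 * r + 1)
    return out

def sieds(img):
    # Re-blur the input to create an internal comparison image.  A sharp
    # input loses much more edge energy when re-blurred than an already
    # motion-blurred one, so the standard deviation of the edge-map
    # difference is high for sharp images and low for blurred ones.
    reference = box_blur(img)
    return float(np.std(edge_magnitude(img) - edge_magnitude(reference)))
```

As the abstract stresses, the score is not an absolute threshold: it is only meaningful relative to the other scores in the same dataset, e.g. flagging images whose value falls well below the dataset median as candidates for exclusion.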