Digital hologram recording systems: some performance improvements
The work presented in this thesis was performed under the EU's Framework 7 (FP7) project 'REAL3D'. The aim of this project is to develop methods based on digital holography for real-time capture and display of 3D objects. This thesis forms a small subset of all the work done in the project, and much of the research was aimed at fulfilling our part of the requirements of the REAL3D project. The central theme of the research presented in this thesis is improving the performance of the digital holographic imaging system for use in 3D display. This encompasses research into speeding up reconstruction algorithms, understanding the influence of noise, and developing techniques to increase the resolution and angular perspective range of reconstructions.
The main original contributions of the research presented in this thesis are:
A computer-interfaced, automatic digital holographic imaging system employing 'phase shifting' has been built. This system is capable of recording high-quality digital holograms of a real-world 3D object. The object can be rotated on a rotation stage, and a full 360° range of perspectives can be recorded. Speckle reduction using moving diffusers can be performed to improve the quality of the reconstructed images. A LabVIEW-based, user-friendly interface has been developed.
Novel methods based on a space-time tradeoff and fixed-point arithmetic have been developed and implemented for speeding up the reconstruction algorithm used in digital holography. This has resulted in one peer-reviewed journal publication and one conference proceeding [1, 2].
The influence of additive noise, particularly quantization noise, in digital holography has been studied in detail. A model has been developed to understand the influence of noise on the reconstructed image quality. Based on this model, a method has been developed to suppress quantization noise in a memory-efficient manner. This work led to two peer-reviewed journal publications [3, 4].
A novel method of removing the twin image has been developed.
Methods to increase the perspectives in holography based on
synthetic aperture have been implemented.
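As an illustration of the phase-shifting recording used in the system above, the standard four-step algorithm can be sketched in a few lines (a minimal simulation only: the synthetic object wave, the unit-amplitude plane reference, and the 0°/90°/180°/270° shifts are assumptions for the example, not details taken from the thesis):

```python
import numpy as np

def four_step_phase_shifting(i0, i90, i180, i270):
    """Recover the complex object wave from four interferograms recorded
    with reference phase shifts of 0, 90, 180 and 270 degrees."""
    return (i0 - i180) + 1j * (i90 - i270)

# Synthetic object wave interfered with a unit-amplitude plane reference.
rng = np.random.default_rng(0)
obj = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

frames = [np.abs(obj + np.exp(1j * shift)) ** 2        # camera intensities
          for shift in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

recovered = four_step_phase_shifting(*frames)
print(np.allclose(recovered, 4 * obj))   # the formula returns 4x the object wave
```

With a zero-phase unit reference, expanding the four intensities shows that (I0 - I180) + i(I90 - I270) equals exactly four times the object wave, which is why the check against 4 * obj holds.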
Apart from these primary contributions, the author of this thesis has also contributed by assisting in experiments, creating figures for various papers, writing computer programs, and taking part in discussions during group meetings. In total, 6 peer-reviewed journal papers (3 as primary author) and 6 conference proceedings (3 as primary author) have been published. Additionally, 2 talks have been given at international conferences.
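The quantization-noise behaviour studied in this thesis can be mimicked with a toy numerical experiment (a sketch under loud assumptions: a uniform random array stands in for the recorded hologram, a plain inverse FFT stands in for the reconstruction, and none of this is the thesis's actual model or method):

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize a real-valued array to 2**bits levels."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    return np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

rng = np.random.default_rng(2)
holo = rng.random((256, 256))     # stand-in for a recorded hologram
recon = np.fft.ifft2(holo)        # stand-in for the numerical reconstruction

snrs = []
for bits in (4, 6, 8):
    err = np.fft.ifft2(quantize(holo, bits)) - recon
    snrs.append(10 * np.log10(np.sum(np.abs(recon) ** 2) /
                              np.sum(np.abs(err) ** 2)))

# Reconstruction SNR grows with bit depth, roughly 6 dB per extra bit.
print([round(s, 1) for s in snrs])
```

Because the discrete Fourier transform is unitary up to a scale factor, quantization noise added in the hologram plane passes into the reconstruction with its relative power unchanged, which is what makes the roughly 6 dB-per-bit trend visible.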
A reciprocal 360-degree 3D light-field image acquisition and display system
A reciprocal 360-degree three-dimensional light-field image acquisition and display system was designed using a common catadioptric optical configuration and a lens array. Proof-of-concept experimental setups were constructed with a full capturing part and a truncated display section to demonstrate that the proposed design works without loss of generality. Unlike conventional setups, which record and display rectangular volumes, the proposed configuration records 3D images from its surrounding spherical volume in the capture mode and projects 3D images to the same spherical volume in the display mode. This is particularly advantageous in comparison with other 360-degree multi-camera and multi-projector display systems, which require extensive image and physical calibration. We analysed the system and derived quality measures, such as angular resolution and space-bandwidth product, from the design parameters.
The issue caused by the pixel-size difference between the available imaging sensor and the display was also addressed: a diffractive microlens array matching the sensor size is used in the acquisition part, whereas a vacuum-cast lens array matching the display size is used in the display part, with scaled optics. The experimental results demonstrate that the proposed system design works well and is in good agreement with the simulation results.
CAPE Acorn 2017 Award
NASA Tech Briefs Index, 1977, volume 2, numbers 1-4
Announcements of new technology derived from the research and development activities of NASA are presented. Abstracts and indexes by subject, personal author, originating center, and Tech Brief number are presented for 1977.
Robust and real-time hand detection and tracking in monocular video
In recent years, personal computing devices such as laptops, tablets and smartphones have become ubiquitous. Moreover, intelligent sensors are being integrated into many consumer devices such as eyeglasses, wristwatches and smart televisions. With the advent of touchscreen technology, a new human-computer interaction (HCI) paradigm arose that allows users to interface with their device in an intuitive manner. Using simple gestures, such as swipe or pinch movements, a touchscreen can be used to directly interact with a virtual environment. Nevertheless, touchscreens still form a physical barrier between the virtual interface and the real world.
An increasingly popular field of research that tries to overcome this limitation is video-based gesture recognition, hand detection and hand tracking. Gesture-based interaction allows users to interact directly with the computer in a natural manner, exploring a virtual reality using nothing but their own body language.
In this dissertation, we investigate how robust hand detection and tracking can be accomplished under real-time constraints. In the context of human-computer interaction, real-time is defined as both low latency and low complexity, such that a complete video frame can be processed before the next one becomes available. Furthermore, for practical applications, the algorithms should be robust to illumination changes, camera motion, and cluttered backgrounds in the scene. Finally, the system should be able to initialize automatically, and to detect and recover from tracking failure. We study a wide variety of existing algorithms, and propose significant improvements and novel methods to build a complete detection and tracking system that meets these requirements.
Hand detection, hand tracking and hand segmentation are related yet technically different challenges. Whereas detection deals with finding an object in a static image, tracking considers temporal information and is used to track the position of an object over time, throughout a video sequence. Hand segmentation is the task of estimating the hand contour, thereby separating the object from its background.
Detection of hands in individual video frames allows us to automatically initialize our tracking algorithm, and to detect and recover from tracking failure. Human hands are highly articulated objects, consisting of finger parts that are connected by joints. As a result, the appearance of a hand can vary greatly, depending on the assumed hand pose. Traditional detection algorithms often assume that the appearance of the object of interest can be described by a rigid model and therefore cannot robustly detect human hands. We therefore developed an algorithm that detects hands by exploiting their articulated nature. Instead of resorting to a template-based approach, we probabilistically model the spatial relations between the different hand parts and the centroid of the hand. Detecting hand parts, such as fingertips, is much easier than detecting a complete hand. Based on our model of the spatial configuration of hand parts, the detected parts can be used to obtain an estimate of the complete hand's position. To comply with the real-time constraints, we developed techniques to speed up the process by efficiently discarding unimportant information in the image. Experimental results show that our method is competitive with the state-of-the-art in object detection while reducing computational complexity by a factor of 1,000. Furthermore, we showed that our algorithm can also be used to detect other articulated objects, such as persons or animals, and is therefore not restricted to the task of hand detection.
Once a hand has been detected, a tracking algorithm can be used to continuously track its position over time. We developed a probabilistic tracking method that can cope with uncertainty caused by image noise, incorrect detections, changing illumination, and camera motion. Furthermore, our tracking system automatically determines the number of hands in the scene, and can cope with hands entering or leaving the video canvas. We introduced several novel techniques that greatly increase tracking robustness and that can also be applied in domains other than hand tracking. To achieve real-time processing, we investigated several techniques to reduce the search space of the problem, and deliberately employ methods that are easily parallelized on modern hardware. Experimental results indicate that our methods outperform the state-of-the-art in hand tracking, while having a much lower computational complexity.
One of the methods used by our probabilistic tracking algorithm is optical flow estimation. Optical flow is defined as a 2D vector field describing the apparent velocities of objects in a 3D scene, projected onto the image plane. Optical flow is known to be used by many insects and birds to visually track objects and to estimate their ego-motion. However, most optical flow estimation methods described in the literature are either too slow to be used in real-time applications, or are not robust to illumination changes and fast motion. We therefore developed an optical flow algorithm that can cope with large displacements and that is illumination independent.
Furthermore, we introduce a regularization technique that ensures a smooth flow-field. This regularization scheme effectively reduces the number of noisy and incorrect flow-vector estimates, while maintaining the ability to handle motion discontinuities caused by object boundaries in the scene.
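As a concrete, if much simpler, illustration of flow-field regularization, the classic Horn-Schunck scheme couples the brightness-constancy constraint with a neighbourhood-averaging smoothness term. The sketch below is a generic textbook version, not the dissertation's algorithm; the smoothing weight alpha, the iteration count and the test images are arbitrary choices for the example:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=200):
    """Dense optical flow with a global smoothness term (Horn-Schunck style):
    each flow vector is iteratively pulled toward its neighbourhood average
    while satisfying the brightness-constancy constraint."""
    im1, im2 = im1.astype(float), im2.astype(float)
    avg = (im1 + im2) / 2.0
    ix = np.gradient(avg, axis=1)      # spatial derivatives
    iy = np.gradient(avg, axis=0)
    it = im2 - im1                     # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        t = (ix * u_avg + iy * v_avg + it) / (alpha ** 2 + ix ** 2 + iy ** 2)
        u = u_avg - ix * t
        v = v_avg - iy * t
    return u, v

# Example: a Gaussian blob shifted one pixel to the right between frames.
yy, xx = np.mgrid[0:64, 0:64]
im1 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
im2 = np.roll(im1, 1, axis=1)
u, v = horn_schunck(im1, im2)
ix = np.gradient(im1, axis=1)
flow_x = np.sum(u * ix ** 2) / np.sum(ix ** 2)   # gradient-weighted mean of u
print(flow_x > 0)                                # flow points in the shift direction
```

The averaging step is exactly the regularizer: it suppresses noisy, isolated flow vectors, at the cost of smoothing across motion boundaries unless, as in the dissertation, the scheme is adapted to preserve discontinuities.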
The above methods are combined into a hand tracking framework which can be used for interactive applications in unconstrained environments. To demonstrate the possibilities of gesture based human-computer interaction, we developed a new type of computer display. This display is completely transparent, allowing multiple users to perform collaborative tasks while maintaining eye contact. Furthermore, our display produces an image that seems to float in thin air, such that users can touch the virtual image with their hands. This floating imaging display has been showcased on several national and international events and tradeshows.
The research that is described in this dissertation has been evaluated thoroughly by comparing detection and tracking results with those obtained by state-of-the-art algorithms. These comparisons show that the proposed methods outperform most algorithms in terms of accuracy, while achieving a much lower computational complexity, resulting in a real-time implementation. Results are discussed in depth at the end of each chapter. This research further resulted in an international journal publication; a second journal paper that has been submitted and is under review at the time of writing this dissertation; nine international conference publications; a national conference publication; a commercial license agreement concerning the research results; two hardware prototypes of a new type of computer display; and a software demonstrator.
Design and information considerations for holographic television
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1988. Title as it appeared in the MIT Graduate List, June 1988: Information and design considerations for holographic television. Includes bibliographical references. Supported by USWEST Advanced Technology, Inc. By Joel S. Kollin. M.S.
Simulation of assist gas flow in laser cutting
The aim of this work was to present a computational CFD model of the assist gas flow in laser cutting. The purpose of the model is to aid the design of assist gas nozzles. The work also discusses the role of the assist gas in laser cutting, so that the reader can better understand the aim of the work.
The simulations were performed with the Star-CCM+ CFD software. A conventional segregated solver was used instead of a coupled solver in order to keep the computation as light as possible, which is particularly beneficial when more complex models are coupled to the gas flow. Two different viscosity models were also compared: Sutherland's law and a constant-viscosity model. Both models gave similar results, but the use of Sutherland's law caused numerical problems, so the constant-viscosity model was better suited to this problem.
The validity of the computational results was assessed by means of schlieren images. The comparison led to the conclusion that the model could predict the gas flow in the Laval nozzle sufficiently well.
Data extraction in holographic particle image velocimetry
Holographic Particle Image Velocimetry (HPIV) is potentially the best technique to obtain instantaneous, three-dimensional flow-field information. Several researchers have presented experimental results demonstrating the power of the HPIV technique. However, the challenge of finding an economical and automatic means to extract and process the immense amount of data from the holograms still remains. This thesis reports on the development of complex amplitude correlation as a means of data extraction. At the same time, three-dimensional quantitative measurement of micro-scale flows is of increasing importance in the design of microfluidic devices. This thesis therefore also reports an investigation of HPIV in micro-scale fluid flow.
The author has re-examined complex amplitude correlation using a formulation of scalar diffraction in three-dimensional vector space. [Continues.]
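In two dimensions, the core idea of complex amplitude correlation can be sketched as an FFT-based cross-correlation of two complex fields, with the correlation peak giving the displacement between them (the thesis works with three-dimensional fields and a scalar-diffraction formulation; the field sizes and the synthetic shift below are illustrative only):

```python
import numpy as np

def complex_correlation_shift(field_a, field_b):
    """Locate the displacement between two complex amplitude fields from the
    peak of their FFT-based cross-correlation (modulo the field size)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(field_a)) * np.fft.fft2(field_b))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    return tuple(int(i) for i in peak)   # (row shift, column shift)

rng = np.random.default_rng(1)
a = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
b = np.roll(a, (7, 12), axis=(0, 1))   # "particles" displaced by 7 rows, 12 cols

print(complex_correlation_shift(a, b))  # → (7, 12)
```

Working with the complex amplitude rather than the intensity retains the phase information, which is what lets the correlation localize particles along the optical axis in the full three-dimensional case.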
The 1974 NASA-ASEE summer faculty fellowship aeronautics and space research program
Research activities by participants in the fellowship program are documented, and include such topics as: (1) multispectral imagery for detecting southern pine beetle infestations; (2) trajectory optimization techniques for low-thrust vehicles; (3) concentration characteristics of a Fresnel solar strip reflection concentrator; (4) calibration and reduction of video camera data; (5) fracture mechanics of Cer-Vit glass-ceramic; (6) space shuttle external propellant tank prelaunch heat transfer; (7) holographic interferometric fringes; and (8) atmospheric wind and stress profiles in a two-dimensional internal boundary layer.
Design and implementation of a digital holographic microscope with fast autofocusing
Holography is a method for three-dimensional (3D) imaging of objects by interferometric analysis. A recorded hologram must be reconstructed in order to image an object; however, one needs to know the appropriate reconstruction distance prior to reconstruction, otherwise the reconstruction is out of focus. If the focus distance of the object is not known a priori, it must be estimated using an autofocusing technique. Traditional autofocusing techniques from the image-processing literature can also be applied to digital holography. In this thesis, eleven common sharpness functions developed for standard photography and microscopy are applied to digital holograms, and the estimation of the focus distances of holograms is investigated. The magnitude of a recorded hologram is quantitatively evaluated for sharpness as it is reconstructed over an interval of distances, and the reconstruction distance that yields the best quantitative result is chosen as the true focus distance of the hologram. However, autofocusing of high-resolution digital holograms is very demanding in terms of computational power. In this thesis, a scaling technique is proposed for increasing the speed of autofocusing in digital holographic applications, where the speed of a reconstruction improves on the order of the square of the scale ratio. Experimental results show that this technique offers a noticeable improvement in the speed of autofocusing while largely preserving accuracy. However, estimation of the true focus point with very high amounts of scaling becomes unreliable, because the scaling method degrades the sharpness curves produced by the sharpness functions. In order to measure the reliability of autofocusing with the scaling technique, fifty computer-generated holograms of grey-scale human portrait, landscape and micro-structure images are created.
Afterwards, autofocusing is applied to the scaled-down versions of these holograms as the scale ratio is increased, and the autofocusing performance is statistically measured as a function of the scale ratio. The simulation results are in agreement with the experimental results, and they show that it is possible to apply the scaling technique without losing significant reliability in autofocusing.
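The distance scan at the heart of such autofocusing can be sketched with an angular-spectrum propagator and a variance sharpness measure (one of many possible sharpness functions, and not the thesis's implementation; the test object, wavelength, pixel pitch and distance grid below are invented for the example):

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field over distance z with the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    h = np.exp(2j * np.pi * (z / wavelength) * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * h)

def autofocus(hologram, wavelength, pitch, distances):
    """Return the candidate distance whose back-propagated reconstruction
    maximizes a variance-based sharpness measure of the amplitude image."""
    scores = [np.var(np.abs(angular_spectrum(hologram, wavelength, pitch, -z)))
              for z in distances]
    return float(distances[int(np.argmax(scores))])

# Synthetic test object: a sharp bright square, propagated 20 mm to the sensor.
n, pitch, wavelength = 256, 10e-6, 633e-9
obj = np.zeros((n, n), dtype=complex)
obj[118:138, 118:138] = 1.0
holo = angular_spectrum(obj, wavelength, pitch, 0.02)

best = autofocus(holo, wavelength, pitch, np.linspace(0.01, 0.03, 21))
print(best)   # expected to recover the 20 mm propagation distance
```

The thesis's scaling idea would enter here by binning the hologram (and enlarging the pixel pitch accordingly) before the coarse distance scan, since each reconstruction then costs only a fraction of a full-resolution FFT.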