Nonperturbing measurements of spatially distributed underwater acoustic fields using a scanning laser Doppler vibrometer
Localized changes in the density of water induced by the presence of an acoustic field cause
perturbations in the localized refractive index. This relationship has given rise to a number of
nonperturbing optical metrology techniques for recording measurement parameters from underwater
acoustic fields. A recently developed method involves the use of a laser Doppler
vibrometer (LDV) targeted at a fixed, non-vibrating plate through an underwater acoustic field.
Measurements of the rate of change of optical pathlength along a line section enable the
identification of the temporal and frequency characteristics of the acoustic wave front. This
approach has been extended through the use of a scanning LDV, which facilitates the measurement
of a range of spatially distributed parameters. A mathematical model is presented that relates the
distribution of pressure amplitude and phase in a planar wave front with the rate of change of optical
pathlength measured by the LDV along a specifically orientated laser line section. Measurements of
a 1 MHz acoustic tone burst generated by a focused transducer are described and the results
presented. Graphical depictions of the acoustic power and phase distribution recorded by the LDV
are shown, together with images representing the time history of the acoustic wave propagation.
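The relationship between the acoustic field and the LDV output can be illustrated with a minimal numerical sketch, in which the LDV signal is modelled as the time derivative of the optical pathlength integral along the laser line. The piezo-optic coefficient, line length and sampling values below are illustrative assumptions, not parameters from the work described.

```python
import numpy as np

# Sketch: the LDV measures d/dt of the optical pathlength
# L(t) = integral of n(x, t) dx along the laser line, where the
# refractive-index perturbation follows the acoustic pressure:
# delta_n = (dn/dp) * p(x, t).  dn/dp ~ 1.5e-10 /Pa for water is an
# assumed textbook value.
DN_DP = 1.5e-10          # refractive-index change per Pa (assumed)
C_WATER = 1480.0         # speed of sound in water, m/s
F_AC = 1.0e6             # 1 MHz tone burst, as in the measurements

def pathlength_rate(t, x, p0=1.0e5):
    """d/dt of optical pathlength for a plane wave crossing the beam."""
    k = 2 * np.pi * F_AC / C_WATER
    dx = x[1] - x[0]
    # pathlength perturbation at each instant: integrate delta_n over the line
    L = np.array([np.sum(DN_DP * p0 * np.sin(2 * np.pi * F_AC * ti - k * x)) * dx
                  for ti in t])
    return np.gradient(L, t)         # numerical time derivative

x = np.linspace(0.0, 0.05, 2000)     # 50 mm laser line section
t = np.linspace(0.0, 5e-6, 500)      # five acoustic cycles
rate = pathlength_rate(t, x)

# The spectrum of the simulated LDV signal peaks at the acoustic frequency.
spectrum = np.abs(np.fft.rfft(rate))
freqs = np.fft.rfftfreq(len(t), t[1] - t[0])
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(peak)                          # close to 1 MHz
```

This reproduces the key property exploited by the method: the temporal and frequency content of the acoustic wave appears directly in the pathlength-rate signal.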
Characterization of Microparticles through Digital Holography
In this work, digital holography (DH) is extensively utilized to characterize microparticles. Here, “characterization” refers to the determination of a particle’s shape, size, and, in some cases, its surface structure. A variety of microparticles, such as environmental dust, pollen, volcanic ash, clay, and biological samples, are thoroughly analyzed. In this technique, the microscopically fine interference pattern generated by the coherent superposition of an object and a reference wave field is digitally recorded by an optoelectronic sensor in the form of a hologram, and the desired particle property is then computationally extracted by performing a numerical reconstruction to form an image of the particle. The objective of this work is to explore, develop, and demonstrate the feasibility of different experimental arrangements to reconstruct the images of various arbitrarily shaped particles. Both forward- and backward-scattering experimental arrangements are constructed and calibrated to quantify the size of several micron-sized particles. The performance and implications of the technique are validated using National Institute of Standards and Technology (NIST)-traceable borosilicate glass microspheres of various diameters and a Thorlabs resolution plate. After successful validation and calibration of the system, the resolution limit of the experimental setup is estimated to be ~10 microns. Particles smaller than 10 microns could not be imaged well enough to ensure that what appeared to be a single particle was not in fact a cluster. The forward- and backward-scattering holograms of different samples are recorded simultaneously, and images of the particles are then computationally reconstructed from these recorded holograms. Our results show that the forward- and backward-scattering images yield different information on the particle surface structure and edge roughness, and thus reveal more about a particle's profile.
This suggests that the two image perspectives reveal aspects of the particle structure not available from the more commonly used forward-scattering image alone. The results of this work could offer further insight into particle morphology and thereby support the advancement of contact-free particle characterization techniques.
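As a rough illustration of the numerical reconstruction step, the sketch below refocuses a recorded complex hologram field using the standard angular spectrum method; the function name, wavelength and pixel pitch are assumptions for illustration, not details of the actual experimental arrangements.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Numerically propagate a sampled complex field to distance z (metres).

    Standard angular-spectrum method; parameter values used here are
    illustrative, not taken from the experiments described.
    """
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # keep propagating components only; evanescent waves are suppressed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy usage: a uniform plane wave propagates with unchanged intensity.
field = np.ones((64, 64), dtype=complex)
out = angular_spectrum_propagate(field, wavelength=633e-9, dx=5e-6, z=0.01)
print(np.allclose(np.abs(out), 1.0))   # True
```

In practice the recorded hologram intensity is propagated to a range of candidate depths and the particle image is taken at the plane of best focus.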
Flame front propagation velocity measurement and in-cylinder combustion reconstruction using POET
The objective of this thesis is to develop an intelligent diagnostic technique,
POET (Passive Optical Emission Tomography), for the investigation of in-cylinder
combustion chemiluminescence. As a non-intrusive optical system, the POET system
employs 40 fibre optic cables connected to 40 photomultiplier tubes (PMTs) to
monitor the combustion process and flame front propagation in a modified commercial
OHV (overhead valve) Pro 206 IC engine.
The POET approach overcomes several limitations of present combustion
research methods through a combination of fibre optic detection probes, photomultipliers
and tomographic diagnostics. The fibre optic probes are mounted on a specially
designed cylinder head gasket, allowing non-invasive insertion into the cylinder. Each independent
probe can measure the turbulent chemiluminescence of the combustion flame front at up to
20 kHz. The resultant intensities can then be combined tomographically using MART
(Multiplicative Algebraic Reconstruction Technique) software to reconstruct an image
of the complete flame front. The approach is essentially a lensless imaging technique,
which has the advantage of not requiring a specialized engine construction with
conventional viewing ports to visualize the combustion image. The fibre optic system,
through the use of 40 thermally isolated, 2 m long fibre optic cables, can withstand
combustion temperatures and is immune to the electrical noise typically generated by
the spark plug.
The POET system uses a MART tomographic methodology to reconstruct the turbulent combustion process. The data collected have been reconstructed to produce a
temporal and spatial image of the combustion flame front. Variations in flame
turbulence are monitored through sequences of reconstructed images. The POET
diagnostic technique thereby avoids the complications of classic flame front propagation
measurement systems and successfully demonstrates the in-cylinder combustion
process.
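The MART reconstruction at the heart of the system can be sketched in a few lines: each ray's measured intensity multiplicatively corrects the image pixels the ray crosses, which preserves non-negativity. The toy 2x2 geometry below is purely illustrative and does not reproduce the 40-probe gasket layout.

```python
import numpy as np

def mart(A, b, n_iter=50, relax=1.0):
    """Multiplicative Algebraic Reconstruction Technique (MART) sketch.

    A: (n_rays, n_pixels) geometry matrix - each row holds one probe's
       line-of-sight weights (illustrative geometry, not the thesis's).
    b: (n_rays,) measured chemiluminescence intensities.
    """
    x = np.ones(A.shape[1])                 # positive initial image
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > 0 and b[i] > 0:
                # multiplicative update keeps the image non-negative
                x *= (b[i] / proj) ** (relax * A[i] / A[i].max())
    return x

# Toy 2x2 "flame" imaged by four rays (two rows and two columns)
truth = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
b = A @ truth
recon = mart(A, b)
print(np.round(A @ recon, 2))   # reprojections of the reconstruction
```

With only four rays the solution is underdetermined, but the reconstruction reproduces every measured projection, which is the consistency property MART guarantees for noise-free data.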
In this thesis, a series of calibration exercises has been performed to ensure
that the photomultipliers of the POET system have sufficient temporal and spatial
resolution to quantitatively map the turbulent flow velocity and chemiluminescence
of the flame front. In the results, the flame has been analyzed using UV and blue
filters to monitor the modified natural gas fuelled engine. The flame front propagation
speed has been evaluated and is, on average, 12 m/s at 2280 rpm. Sequences of
images have been used to illustrate the combustion process at different engine speeds.
Holoscopic 3D image depth estimation and segmentation techniques
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Today’s 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue and headaches after prolonged viewing, as users are required to focus on the screen plane (accommodation) while converging their eyes to a point in space in a different plane (convergence). Holoscopy is a 3D technology that aims to overcome the above limitations of current 3D technology and was recently developed at Brunel University. This work is part W4.1 of the 3D VIVANT project, which is funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques that are capable of estimating precise 3D depth and are applicable to the holoscopic 3D imaging system. Particular emphasis is given to automatic techniques, i.e. the work favours algorithms with broad generalisation abilities, as no constraints are placed on the setting, and algorithms that provide invariance to most appearance-based variations of objects in the scene (e.g. viewpoint changes, deformable objects, presence of noise and changes in lighting). Moreover, the techniques should be able to estimate depth information from both types of holoscopic 3D images, i.e. unidirectional and omnidirectional, which give horizontal parallax and full parallax (vertical and horizontal), respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with great precision.
In particular, emphasis is placed on the automation of thresholding techniques and the identification of cues for the development of robust algorithms. A method for depth-through-disparity feature analysis has been built on the existing correlation between pixels one micro-lens pitch apart, which has been exploited to extract the viewpoint images (VPIs). The corresponding displacement among the VPIs has been exploited to estimate the depth information map by setting and extracting reliable sets of local features. Feature-based-point and feature-based-edge are two novel automatic thresholding techniques for detecting and extracting features that have been used in this approach. These techniques offer a solution to the problem of setting and extracting reliable features automatically, improving the performance of the depth estimation in terms of generalization, speed and quality. Due to the resolution limitation of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, sub-pixel shift and integration, a novel interpolation technique, has been used in this approach to generate super-resolution VPIs. By shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide field of view (FOV). This means that the holoscopic 3D image system can be converted into a multi-view 3D image pixel format. Both depth accuracy and a fast execution time have been achieved, improving the 3D depth map. For a 3D object to be recognized, the related foreground regions and depth information map need to be identified. Two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were developed.
Both techniques offer improvements over existing methods owing to their simplicity and full automation, producing the 3D depth interactive map without human interaction. The final contribution is a performance evaluation, providing an equitable measurement of the success of the proposed techniques for foreground object segmentation, 3D depth interactive map creation and the generation of 2D super-resolution viewpoints. No-reference image quality assessment metrics and their correlation with human perception of quality are used, with the help of human participants, in a subjective manner.
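The depth-through-disparity idea can be illustrated with a toy one-dimensional sketch: a VPI is extracted by taking the same pixel under every micro-lens, and the displacement between two VPIs is found by correlation. The micro-lens pitch and signal sizes below are assumed values, not the parameters of the holoscopic camera used in the thesis.

```python
import numpy as np

PITCH = 8   # pixels per micro-lens (assumed)

def extract_vpi(holoscopic_row, k):
    """k-th viewpoint image from a 1-D unidirectional holoscopic row."""
    return holoscopic_row[k::PITCH]

def disparity(vpi_a, vpi_b, max_shift=5):
    """Integer disparity between two VPIs by mean-removed correlation."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = vpi_a[max_shift:-max_shift]
        b = vpi_b[max_shift - s: len(vpi_b) - max_shift - s]
        score = np.dot(a - a.mean(), b - b.mean())
        if score > best_score:
            best, best_score = s, score
    return best

rng = np.random.default_rng(0)
row = rng.normal(size=800)          # synthetic unidirectional holoscopic row
vpi3 = extract_vpi(row, 3)          # one 100-sample viewpoint image

# Toy scene: a texture that appears shifted by 2 samples between viewpoints
pattern = rng.normal(size=200)
vpi0 = pattern[10:110]
vpi1 = pattern[12:112]              # ground-truth disparity of 2
print(disparity(vpi0, vpi1))        # 2
```

In the thesis the matching is restricted to reliable feature points rather than applied densely, but the underlying displacement-to-depth relation is the same.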
Methods for Light Field Display Profiling and Scalable Super-Multiview Video Coding
Light field 3D displays reproduce the light field of real or synthetic scenes, as observed by multiple viewers, without the necessity of wearing 3D glasses. Reproducing light fields is a technically challenging task in terms of optical setup, content creation and distributed rendering, among others; however, the impressive visual quality of hologram-like scenes, in full color, with real-time frame rates, and over a very wide field of view justifies the complexity involved. Seeing objects popping far out from the screen plane without glasses impresses even those viewers who have experienced other 3D displays before. Content for these displays can either be synthetic or real. The creation of synthetic (rendered) content is relatively well understood and used in practice. Depending on the technique used, rendering has its own complexities, quite similar to the complexity of rendering techniques for 2D displays. While rendering can be used in many use cases, the holy grail of all 3D display technologies is to become the future 3DTV, ending up in each living room and showing realistic 3D content without glasses. Capturing, transmitting, and rendering live scenes as light fields is extremely challenging, yet it is necessary if we are to experience light field 3D television showing real people and natural scenes, or realistic 3D video conferencing with real eye contact. In order to provide the required realism, light field displays aim to provide a wide field of view (up to 180°), while reproducing up to ~80 MPixels nowadays. Building gigapixel light field displays is realistic in the next few years. Likewise, capturing live light fields involves using many synchronized cameras that cover the same wide display field of view and provide the same high pixel count. Therefore, light field capture and content creation have to be well optimized with respect to the targeted display technologies.
Two major challenges in this process are addressed in this dissertation. The first challenge is how to characterize the display in terms of its capabilities to create light fields, that is, how to profile the display in question. In clearer terms, this boils down to finding the equivalent spatial resolution, which is similar to the screen resolution of 2D displays, and the angular resolution, which describes the smallest angle whose color the display can control individually. The light field is formalized as a 4D approximation of the plenoptic function in terms of geometrical optics, through spatially localized and angularly directed light rays in the so-called ray space. Plenoptic sampling theory provides the conditions required to sample and reconstruct light fields. Subsequently, light field displays can be characterized in the Fourier domain by the effective display bandwidth they support. In the thesis, a methodology for display-specific light field analysis is proposed. It regards the display as a signal processing channel and analyses it as such in the spectral domain. As a result, one is able to derive the display throughput (i.e. the display bandwidth) and, subsequently, the optimal camera configuration to efficiently capture and filter light fields before displaying them. While the geometrical topology of optical light sources in projection-based light field displays can be used to theoretically derive the display bandwidth, and its spatial and angular resolution, in many cases this topology is not available to the user. Furthermore, there are many implementation details which cause the display to deviate from its theoretical model. In such cases, profiling light field displays in terms of spatial and angular resolution has to be done by measurement. Measurement methods in which the display shows specific test patterns, which are then captured by a single static or moving camera, are proposed in the thesis.
Determining the effective spatial and angular resolution of a light field display is then based on an automated frequency-domain analysis of the captured images, as they are reproduced by the display. The analysis reveals the empirical limits of the display in terms of pass-band in both the spatial and angular dimensions. Furthermore, the spatial resolution measurements are validated by subjective tests confirming that the results are in line with the smallest features human observers can perceive on the same display. The resolution values obtained can be used to design the optimal capture setup for the display in question. The second challenge is related to the massive number of captured views and pixels that have to be transmitted to the display. This clearly requires effective and efficient compression techniques to fit into the available bandwidth, as an uncompressed representation of such a super-multiview video could easily consume ~20 gigabits per second with today’s displays. Due to the high number of light rays to be captured, transmitted and rendered, distributed systems are necessary for both capturing and rendering the light field. During the first attempts to implement real-time light field capturing, transmission and rendering using a brute-force approach, limitations became apparent. Still, due to the best possible image quality achievable with dense multi-camera light field capturing and light ray interpolation, this approach was chosen as the basis of further work, despite the massive amount of bandwidth needed. Decompression of all camera images in all rendering nodes, however, is prohibitively time consuming and is not scalable. After analyzing the light field interpolation process and the data-access patterns typical of a distributed light field rendering system, an approach to reduce the amount of data required in the rendering nodes has been proposed.
This approach, on the other hand, requires rectangular parts (typically vertical bars in the case of a horizontal-parallax-only light field display) of the captured images to be available in the rendering nodes, which might be exploited to reduce the time spent decompressing video streams. However, partial decoding is not readily supported by common image and video codecs. In the thesis, approaches aimed at achieving partial decoding are proposed for H.264, HEVC, JPEG and JPEG2000, and the results are compared. The results of the thesis on display profiling facilitate the design of optimal camera setups for capturing scenes to be reproduced on 3D light field displays. The developed super-multiview content encoding also facilitates light field rendering in real time. This makes live light field transmission and real-time teleconferencing possible in a scalable way, using any number of cameras, and at the spatial and angular resolution the display actually needs for achieving a compelling visual experience.
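The frequency-domain profiling idea can be sketched as a small experiment: sinusoidal test patterns of increasing frequency are "displayed", and the pass-band is taken as the highest frequency whose captured contrast stays above a threshold. The Gaussian-MTF display model and all numbers below are assumptions for illustration; the thesis derives the pass-band from camera captures of a real display.

```python
import numpy as np

def modulation_depth(signal):
    """Michelson contrast of a captured 1-D test pattern."""
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def effective_resolution(display_lowpass, freqs, n=512, threshold=0.45):
    """Highest test frequency (cycles/screen) above the contrast threshold."""
    x = np.arange(n) / n
    passband = 0
    for f in freqs:
        pattern = 0.5 + 0.5 * np.cos(2 * np.pi * f * x)   # test pattern
        captured = display_lowpass(pattern, f)
        if modulation_depth(captured) >= threshold:
            passband = f
    return passband

# Toy display model: Gaussian-like MTF halving contrast at ~40 cycles/screen
def toy_display(pattern, f, f_half=40.0):
    mtf = 0.5 ** ((f / f_half) ** 2)
    return 0.5 + mtf * (pattern - 0.5)

print(effective_resolution(toy_display, range(5, 100, 5)))   # ~40
```

Replacing `toy_display` with real captures of displayed patterns turns the same loop into the measurement procedure sketched above; the angular pass-band is measured analogously with a moving camera.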
Quantitative and automatic analysis of interferometric fringe data using carrier fringe and FFT techniques
Computerised analysis of optical fringe patterns is a rapidly developing approach to extracting quantitative data from phase-encoded intensity distributions. This thesis describes the results of investigations into the quantitative and automatic analysis of interference fringe data using carrier fringe and FFT techniques. Several automatic and semi-automatic fringe analysis algorithms that enable the reduction of fringe patterns to quantitative data have been reviewed and illustrated with examples. A new holographic contouring approach, based on moving the object beams through fibre optics, is described. The use of fibre optics provides a simple method for obtaining contouring fringes in the holographic system.
A carrier fringe technique for measuring surface deformation is described and verified by experiments. A theoretical analysis of the carrier fringe technique is given. The effects of carrier frequency on holographic fringe data have been investigated with computer-generated holograms. In contrast to conventional holography and fringe analysis, this holographic system is based on fibre optics and an automatic spatial carrier fringe analysis technique. The FFT approach is used to process the interferograms. An excellent correlation between the theoretical deformation profile and that suggested by the technique is demonstrated. The accuracy of the measurement for a centrally loaded aluminium disk is 0.05 µm.
The design and construction of a computerised photoelastic stress measurement system is discussed. This full-field, fully automated photoelastic stress measurement system is a new approach to photoelastic fringe analysis. Linear carrier fringes generated using a quartz wedge are superimposed on the fringes formed by the stressed model. The resultant fringe pattern is then captured using a CCD camera and stored in a digital frame buffer. An FFT method has been used to process the complete photoelastic fringe image over the whole surface of the model. The whole principal stress difference field has been calculated and plotted from a single video frame.
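The spatial-carrier FFT processing used throughout this work follows the well-known Takeda approach, which can be sketched for a one-dimensional fringe pattern as follows; the carrier frequency and test phase below are illustrative values, not experimental parameters.

```python
import numpy as np

def carrier_fringe_phase(fringes, f0, n):
    """Recover the wrapped phase from a 1-D carrier-fringe pattern.

    The fringes a + b*cos(2*pi*f0*x + phi(x)) are Fourier-transformed,
    the +f0 sideband is isolated, and the carrier is removed to leave phi.
    """
    F = np.fft.fft(fringes)
    freqs = np.fft.fftfreq(n, d=1.0 / n)      # cycles per record
    # isolate the positive carrier sideband with a simple box filter
    sideband = np.where(np.abs(freqs - f0) < f0 / 2, F, 0)
    analytic = np.fft.ifft(sideband)
    x = np.arange(n) / n
    # removing the carrier term leaves the modulating (wrapped) phase
    return np.angle(analytic * np.exp(-2j * np.pi * f0 * x))

n, f0 = 1024, 64                               # samples, carrier cycles
x = np.arange(n) / n
phi = 1.5 * np.sin(2 * np.pi * 2 * x)          # "deformation" phase
fringes = 2.0 + np.cos(2 * np.pi * f0 * x + phi)
recovered = carrier_fringe_phase(fringes, f0, n)
print(np.max(np.abs(recovered - phi)))         # small residual error
```

Because the test phase here stays within ±π, no unwrapping step is needed; a measured deformation field would generally require one.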
The Acoustic Hologram and Particle Manipulation with Structured Acoustic Fields
This book shows how arbitrary acoustic wavefronts can be encoded in the thickness profile of a phase plate - the acoustic hologram. The workflow for the design and implementation of these elements has been developed and is presented in this work, along with examples in microparticle assembly, object propulsion and levitation in air. To complement these results, a fast thermographic measurement technique has been developed to scan and validate 3D ultrasound fields in a matter of seconds.
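The encoding principle can be illustrated with a short sketch: a ray crossing a plate of thickness T accumulates phase k_plate*T instead of k_water*T, so a desired wrapped phase map converts directly into a thickness profile. The drive frequency and material sound speeds below are typical assumed values, not figures from the book.

```python
import numpy as np

F = 2.0e6                 # drive frequency, Hz (assumed)
C_WATER = 1480.0          # speed of sound in water, m/s
C_PLATE = 2500.0          # printed-polymer sound speed, m/s (assumed)

def thickness_map(target_phase):
    """Thickness profile (metres) encoding a wrapped target phase map.

    T = phi / (k_water - k_plate): the phase deficit accumulated in the
    faster plate material relative to water, wrapped to keep the plate thin.
    """
    k_water = 2 * np.pi * F / C_WATER
    k_plate = 2 * np.pi * F / C_PLATE
    wrapped = np.mod(target_phase, 2 * np.pi)   # keep thickness within one wrap
    return wrapped / (k_water - k_plate)

# Toy target: a linear phase ramp, which steers the beam off-axis
x = np.linspace(-5e-3, 5e-3, 101)
phase = 2 * np.pi * x / 2.5e-3                  # one wrap every 2.5 mm
T = thickness_map(phase)
print(T.max() * 1e3)                            # maximum thickness, mm
```

With these assumed values one full 2π wrap corresponds to under 2 mm of material, which is why wrapped holograms remain practical to fabricate.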
Scalable multi-view stereo camera array for real world real-time image capture and three-dimensional displays
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (leaves 71-75). The number of three-dimensional displays available is escalating, and yet the capturing devices for multiple-view content are focused on either single-camera precision rigs that are limited to stationary objects or the use of synthetically created animations. In this work we use inexpensive digital CMOS cameras to explore a multi-image capture paradigm and the gathering of real-world real-time data of active and static scenes. The capturing system can be developed and employed for a wide range of applications such as portrait-based images for multi-view facial recognition systems, hypostereo surgical training systems, and stereo surveillance by unmanned aerial vehicles. The system will be adaptable to capturing the correct stereo views based on the environmental scene and the desired three-dimensional display. Several issues explored by the system include image calibration, geometric correction, the possibility of object tracking, and transfer of the array technology into other image capturing systems. These features provide the user more freedom to interact with their specific 3-D content while allowing the computer to take on the difficult role of stereoscopic cinematographer. Samuel L. Hill. S.M.
Phase-space representation of digital holographic and light field imaging with application to two-phase flows
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 125-133). In this thesis, two computational imaging techniques used for underwater research, in particular two-phase flow measurements, are presented. The techniques under study, digital holographic imaging and light field imaging, are targeted at different flow conditions. In low-density flows, particles and air bubbles in water can be imaged by a digital holographic imaging system to provide 3D flow information. In the high-density case, both occlusions and scattering become significant; imaging through these partial occlusions to achieve object detection is possible by integrating views from multiple perspectives, which is the principle of light field imaging. The analyses of the digital holographic and light field imaging systems are carried out within the framework of phase-space optics. In the holographic imaging system, it is seen that, by tracking the space-bandwidth transfer, the information transformation through a digital holographic imaging system can be traced. The inverse source problem of holography can be solved in certain cases by posing proper a priori constraints. In the application to two-phase flows, the 3D positions of bubbles can be computed by well-tuned focus metrics. The statistical size distribution of the bubbles can also be obtained from the reconstructed images. The light field is related to the Wigner distribution through the generalized radiance function. One practical way to sample the Wigner distribution is to take intensity measurements behind an aperture which is moving laterally in the field. Two types of imaging systems, light field imaging and integral imaging, realize this Wigner sampling scheme. In light field imaging, the aperture function is a rect function, while in integral imaging it is a sinc function.
Axial ranging through the object space can be realized by digital refocusing. In addition, imaging through partial occlusion is possible by integrating properly selected Wigner samples. Lei Tian. S.M.
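The digital refocusing mentioned above can be sketched as a shift-and-add over the sampled views: each sub-aperture view is translated in proportion to its aperture offset and the chosen refocus parameter, then the views are averaged. The toy synthetic light field below is illustrative; the thesis applies refocusing to measured two-phase-flow data.

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift each view by alpha * offset pixels and average (shift-and-add)."""
    acc = np.zeros_like(views[0], dtype=float)
    for view, u in zip(views, offsets):
        acc += np.roll(view, int(round(alpha * u)), axis=1)
    return acc / len(views)

# Toy scene: a point at the depth where alpha = 2, so its image sits
# 2*u pixels off-centre in the view with aperture offset u.
offsets = [-2, -1, 0, 1, 2]
views = []
for u in offsets:
    img = np.zeros((5, 32))
    img[2, 16 - 2 * u] = 1.0
    views.append(img)

sharp = refocus(views, offsets, alpha=2)    # refocused at the point's depth
blurry = refocus(views, offsets, alpha=0)   # focused at a different depth
print(sharp.max(), blurry.max())            # 1.0 vs 0.2
```

Sweeping alpha and applying a focus metric to each refocused image yields the axial position of a particle or bubble, which is the ranging strategy described above.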
Capturing Culture: The Practical Application of Holographic Recording for Artefacts Selected from the Heritage and Museums of the Arabian Peninsula
Recording cultural heritage is one of the most important issues for consideration in the twenty-first century. Safeguarding, protecting and preserving heritage, through effective mechanisms, is of crucial importance. Holographic technology has the potential to offer an appropriate solution to issues in documenting, cataloguing and replaying the original optical information of an artefact as a three-dimensional image.
This thesis investigates the relationship between art and technology through holograms recorded as part of a practice-based research programme. It questions whether the holographic medium can be used to capture and disseminate information for use in audience interaction, and therefore raise public awareness, by solving the problem of displaying the original artefacts outside the museum context. Using holographic records of such valuable items has the potential to save them from being lost or destroyed, and opens up the prospect of a new form of virtual museum.
This research examines the possibility of recording valuable and priceless artefacts using a mobile holographic recording system designed for museums. To this end, historical, traditional and cultural artefacts on display in Saudi heritage museums have been selected. This project involves the recording of ancient Arabian Peninsula cultural heritage, and in particular jewellery artefacts, which are perceived as three-dimensional images created using holographic wavefront information. The research adopts both qualitative and quantitative research methods and a critical review of relevant literature on the holographic medium to determine how it might provide an innovative method of engaging museums in Saudi Arabia. The findings of this research offer an original contribution to knowledge and understanding for scholars concerned with the conservation of Saudi Arabia’s cultural heritage.