Computerised stereoscopic measurement of the human retina
The research described herein is an investigation into the problems of obtaining useful clinical measurements from stereo photographs of the human retina through automation of the stereometric procedure by digital stereo matching and image analysis techniques. Clinical research has indicated a correlation between physical changes to the optic disc topography (the region on the retina where the optic nerve enters the eye) and the advance of eye diseases such as hypertension and glaucoma. Stereoscopic photography of the human retina (or fundus, as it is called) and the subsequent measurement of the topography of the optic disc is of great potential clinical value as an aid in observing the pathogenesis of such disease, and to this end, accurate measurements of the various parameters that characterise the changing shape of the optic disc topography must be provided. Following a survey of current clinical methods for stereoscopic measurement of the optic disc, fundus image data acquisition, stereo geometry, limitations of resolution and accuracy, and other relevant physical constraints related to fundus imaging are investigated. A survey of digital stereo matching algorithms is presented and their strengths and weaknesses are explored, specifically as they relate to the suitability of each algorithm for the fundus image data. The selection of an appropriate stereo matching algorithm is discussed, and its application to four test data sets is presented in detail. A mathematical model of two-dimensional image formation is developed together with its corresponding auto-correlation function. In the presence of additive noise, the model is used as a tool for exploring key problems with respect to the stereo matching of fundus images. Specifically, measures for predicting correlation matching error are developed and applied.
Such measures are shown to be of use in applications where the results of image correlation cannot be independently verified and meaningful quantitative error measures are required. The application of these theoretical tools to the fundus image data indicates a systematic way to measure, assess and control cross-correlation error. Conclusions drawn from this research point the way forward for stereo analysis of the optic disc and highlight a number of areas which will require further research. The development of a fully automated system for diagnostic evaluation of the optic disc topography is discussed in the light of the results obtained during this research.
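The correlation-based matching the abstract describes can be sketched as a window search along an epipolar line. The following is a minimal illustration only, not the thesis's algorithm: it uses normalised cross-correlation and a simple peak-sharpness score as a stand-in for the analytically derived error-prediction measures the thesis develops; all names and parameter values are assumptions.

```python
import numpy as np

def ncc(a, b):
    # Normalised cross-correlation of two equal-sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_disparity(left, right, x, y, half, max_d):
    # Slide a window from `left` along the same row of `right`
    # and return the disparity with the highest NCC score.
    patch = left[y - half:y + half + 1, x - half:x + half + 1]
    scores = []
    for d in range(max_d + 1):
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
        scores.append(ncc(patch, cand))
    best = int(np.argmax(scores))
    # A flat correlation peak signals an unreliable match: a crude proxy
    # for the error-prediction measures developed in the thesis.
    sharpness = scores[best] - sorted(scores)[-2] if len(scores) > 1 else 0.0
    return best, scores[best], sharpness
```

On a synthetic pair where the right image is a pure horizontal shift of the left, the best-scoring disparity recovers that shift exactly; on real fundus images, low peak sharpness would flag regions where the correlation result should not be trusted.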
Holoscopic 3D image depth estimation and segmentation techniques
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue and headaches after prolonged viewing, because viewers are required to focus on the screen plane (accommodation) while converging their eyes to a point in space on a different plane (convergence). Holoscopy is a 3D technology, recently developed at Brunel University, that aims to overcome these limitations of current 3D technology. This work is part W4.1 of the 3D VIVANT project, which is funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques that are capable of estimating precise 3D depth and are applicable to the holoscopic 3D imaging system. Particular emphasis is given to automatic techniques: the work favours algorithms with broad generalisation abilities, since no constraints are placed on the setting, and algorithms that are invariant to most appearance-based variations of objects in the scene (e.g. viewpoint changes, deformable objects, the presence of noise and changes in lighting). Moreover, the techniques must be able to estimate depth information from both types of holoscopic 3D image, unidirectional and omnidirectional, which give horizontal parallax and full parallax (vertical and horizontal) respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with great precision.
In particular, emphasis is placed on automating thresholding techniques and identifying cues for the development of robust algorithms. A depth-through-disparity feature analysis method has been built: the existing correlation between pixels at a one micro-lens pitch is exploited to extract the viewpoint images (VPIs), and the corresponding displacement among the VPIs is then exploited to estimate the depth map by setting and extracting reliable sets of local features. Feature-based-point and feature-based-edge are two novel automatic thresholding techniques for detecting and extracting features that have been used in this approach. These techniques offer a solution to the problem of setting and extracting reliable features automatically, improving the performance of the depth estimation in terms of generalisation, speed and quality. Owing to the resolution limitation of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, sub-pixel shift and integration, a novel interpolation technique, has been used in this approach to generate super-resolution VPIs. By shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide field of view (FOV), which means that the holoscopic 3D image can be converted into a multi-view 3D image pixel format. Both depth accuracy and a fast execution time have been achieved, improving the 3D depth map. For a 3D object to be recognised, the related foreground regions and depth map need to be identified. Two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were developed.
Both techniques improve on existing methods through their simplicity and full automation, producing the 3D interactive depth map without human interaction. The final contribution is a performance evaluation that provides an equitable measure of the success of the proposed techniques for foreground object segmentation, 3D interactive depth map creation and the generation of 2D super-resolution viewpoints. No-reference image quality assessment metrics, and their correlation with human perception of quality, are evaluated subjectively with the help of human participants.
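The two core steps of the pipeline above, slicing VPIs out of the interleaved micro-lens image and recombining them by shift-and-integrate, can be caricatured in a few lines. This is a simplified sketch under stated assumptions (a unidirectional holoscopic image, nearest-neighbour up-sampling, a constant known per-view shift); the thesis's actual technique operates at sub-pixel precision with estimated disparities.

```python
import numpy as np

def extract_viewpoints(holo, pitch):
    # A unidirectional holoscopic image interleaves micro-images of width
    # `pitch`; taking pixel k under every micro-lens yields viewpoint k.
    h, w = holo.shape
    n = w // pitch
    return [holo[:, k::pitch][:, :n] for k in range(pitch)]

def shift_and_integrate(vpis, shift_per_view, scale):
    # Up-sample each low-resolution VPI, shift it by its view-dependent
    # displacement, and average: the sub-pixel offsets between views are
    # what supply the new information for a super-resolved viewpoint.
    h, w = vpis[0].shape
    acc = np.zeros((h, w * scale))
    for k, v in enumerate(vpis):
        up = np.repeat(v, scale, axis=1)  # nearest-neighbour up-sampling
        acc += np.roll(up, int(round(k * shift_per_view * scale)), axis=1)
    return acc / len(vpis)
```

With a micro-lens pitch of 3 pixels, a 12-pixel-wide holoscopic row yields three 4-pixel-wide VPIs, and integrating them at scale 2 produces an 8-pixel-wide super-resolved view.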
Holoscopic 3D imaging and display technology: Camera/ processing/ display
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Holoscopic 3D imaging, or "integral imaging", was first proposed by Lippmann in 1908. It has become an attractive technique for creating full-colour 3D scenes that exist in space. It uses a single camera aperture to record the spatial information of a real scene, together with a regularly spaced microlens array that simulates the fly's-eye principle, creating physical duplicates of the light field: a "true" 3D imaging technique.
While stereoscopic and multiview 3D imaging systems, which simulate the two-eyed human viewing technique, are widely available in the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate the spatial resolution of holoscopic 3D imaging and display technology, covering the holoscopic 3D camera, processing and display.
A smart microlens array architecture is proposed that doubles the horizontal spatial resolution of the holoscopic 3D camera by trading horizontal and vertical resolution. In particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer-graphics rendering techniques are proposed that simplify rendering complexity and facilitate holoscopic 3D content generation.
A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters for holoscopic 3D images are proposed for spatial data alignment and 3D image data processing. In addition, a dynamic hyperlinker tool is developed that makes interactive holoscopic 3D video content searchable and browsable.
Novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM raises the horizontal density from 44 3D-PPI to 176 3D-PPI and achieves a spatial resolution of 1365 × 384 3D-pixels, whereas the traditional spatial resolution is 341 × 1536 3D-pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB colour-channel elemental images.
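The 4D-DSPM figures quoted above amount to trading vertical for horizontal 3D resolution by a factor of four, which a quick back-of-envelope check confirms (the one-pixel difference from the quoted 1365 presumably comes from how the lens array count is rounded):

```python
# Sanity check of the resolution trade implied by the quoted 4D-DSPM figures.
# Numbers come from the abstract above; the factor of 4 is inferred, not stated.
trad_w, trad_h = 341, 1536        # traditional 3D-pixel resolution
factor = 4
new_w, new_h = trad_w * factor, trad_h // factor
print(new_w, new_h)               # 1364 384, matching the quoted 1365 x 384 up to rounding

ppi_traditional = 44
print(ppi_traditional * factor)   # 176 3D-PPI horizontally, as quoted
```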
End-to-end 3D video communication over heterogeneous networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.

Three-dimensional technology, more commonly referred to as 3D technology, has revolutionised many fields, including entertainment, medicine and communications. In addition to 3D films, games and sports channels, 3D perception has made tele-medicine a reality. Consumer electronics manufacturers predicted that, by 2015, 30% of all HD panels in the home would be 3D-enabled. Stereoscopic cameras, a comparatively mature technology compared with other 3D systems, are now being used by ordinary citizens to produce 3D content and share it at the click of a button, just as they do with its 2D counterpart, via sites like YouTube. But technical challenges still exist, including with autostereoscopic multiview displays. Because of its increased amount of data, 3D content raises many complex considerations for transmission or storage, including how to represent it and which compression format is best; any decision must be taken in the light of the available bandwidth or storage capacity, quality and user expectations. Free-viewpoint navigation also remains partly unsolved. The most pressing issue in the way of widespread uptake of consumer 3D systems is the ability to deliver 3D content to heterogeneous consumer displays over heterogeneous networks. Optimising 3D video communication must consider the entire pipeline, from optimisation at the video source, through transmission, to the end display. Multi-view offers the most compelling solution for 3D video, with motion parallax and freedom from headgear for 3D perception. Optimising multi-view video for delivery and display could increase the demand for true 3D in the consumer market.
This thesis focuses on end-to-end quality optimisation in 3D video communication/transmission, offering solutions for optimisation at the compression, transmission and decoder levels.

Brunel University - Isambard Research Scholarship
Selected Problems in Photogrammetric Systems Analysis
This dissertation deals with selected topics in digital photogrammetry. The first part defines the problem and describes the state of the art. Four inter-related goals are then addressed in turn. The first is the design of a method for finding corresponding points in images: two new methods were proposed, one using conversion of the images to pseudo-colours and the other using a probabilistic model obtained from known pairs of corresponding points. The second topic is the analysis of the accuracy of the reconstructed 3D points. The influence of various factors on the reconstruction accuracy is analysed, with most attention paid to incorrect camera alignment and errors in finding corresponding points. The third topic is the estimation of depth maps, for which two methods were proposed: the first combines a passive and an active method, while the second, wholly passive, approach exploits the continuity of the depth map. The last topic is the quality of experience of 3D video. Subjective tests of the perception of 3D content on various 3D display systems were performed and statistically evaluated, and the dependence of the perception on the viewing angle was investigated.
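Why camera alignment and corresponding-point errors dominate reconstruction accuracy follows directly from the triangulation geometry. For a rectified stereo pair, depth is Z = f·B/d (focal length f in pixels, baseline B, disparity d), and differentiating shows the depth error grows with the square of the depth itself. A minimal sketch, with purely illustrative camera parameters:

```python
# Depth from disparity in a rectified stereo pair: Z = f * B / d.
# Differentiating gives dZ ~= (Z**2 / (f * B)) * dd, so a fixed matching
# error of dd pixels costs far more depth accuracy at long range.
def depth(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, disparity_px, disparity_err_px):
    z = depth(f_px, baseline_m, disparity_px)
    return z * z / (f_px * baseline_m) * disparity_err_px

# Illustrative values: f = 1000 px, B = 0.1 m, d = 20 px, matching error 0.5 px.
z = depth(1000.0, 0.1, 20.0)               # 5.0 m
err = depth_error(1000.0, 0.1, 20.0, 0.5)  # 0.125 m of depth uncertainty
```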
Widening the view angle of auto-multiscopic display, denoising low brightness light field data and 3D reconstruction with delicate details
This doctoral thesis presents the results of my work on widening the viewing angle
of the auto-multiscopic display, denoising and enhancing light field data captured
in low-light circumstances, and reconstructing subject surfaces with delicate
details from microscopy image sets.
The automultiscopic displays carefully control the distribution of emitted light over
space, direction (angle) and time so that even a static image displayed can encode
parallax across viewing directions (light field). This allows simultaneous observation by
multiple viewers, each perceiving 3D from their own (correct) perspective. Currently,
the illusion can only be effectively maintained over a narrow range of viewing angles.
We propose and analyze a simple solution to widen the range of viewing angles for
automultiscopic displays that use parallax barriers. We insert a refractive medium, with
a high refractive index, between the display and parallax barriers. The inserted medium
warps the exitant lightfield in a way that increases the potential viewing angle. We
analyze the consequences of this warp and build a prototype with a 93% increase in
the effective viewing angle. Additionally, we developed an integral image synthesis
method that handles the refraction introduced by the inserted medium efficiently,
without the use of ray tracing.
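The widening effect can be sketched with a first-order geometric model: inside the high-index medium a ray leaves the pixel plane at an angle set by the pixel pitch and barrier gap, and refraction at the exit surface amplifies that angle via Snell's law. The numbers below are illustrative assumptions, not the prototype's parameters, so the gain here (roughly 70% for n = 1.7) is not a reproduction of the 93% figure above.

```python
import math

def half_view_angle_deg(pixel_pitch, barrier_gap, n=1.0):
    # Geometric half viewing angle of a parallax-barrier display.
    # Inside a medium of refractive index n, a ray from a pixel to the
    # barrier slit runs at theta_in = atan(pitch / (2 * gap)); Snell's law
    # at the exit surface widens it: sin(theta_out) = n * sin(theta_in).
    theta_in = math.atan(pixel_pitch / (2.0 * barrier_gap))
    s = min(1.0, n * math.sin(theta_in))
    return math.degrees(math.asin(s))

# Illustrative geometry: 0.1 mm pitch, 0.3 mm gap.
narrow = half_view_angle_deg(0.1, 0.3)       # air between display and barrier
wide = half_view_angle_deg(0.1, 0.3, n=1.7)  # hypothetical high-index medium
```

Because asin grows faster than linearly, the relative gain from the inserted medium increases at larger design angles, which is consistent with the warp analysis described above.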
Capturing a light field image with a short exposure time is preferable for eliminating
motion blur, but in a low-light environment it also leads to low brightness and hence
a low signal-to-noise ratio. Most light field denoising methods apply a regular 2D
image denoising method directly to the sub-aperture images of a 4D light field, but
this is not suitable for focused light field data, whose sub-aperture image resolution
is too low for regular denoising methods. We therefore propose a deep-learning
denoising method based on the micro-lens images of a focused light field, which
denoises the depth map and the original micro-lens image set simultaneously, and
achieves high-quality totally focused images from low-light focused light field data.
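The redundancy that makes micro-lens-based denoising possible is that many micro-lens images observe the same scene points. The classical baseline any learned denoiser has to beat is simple averaging of registered observations, which cuts zero-mean noise by a factor of sqrt(N). A synthetic-data sketch (this is the baseline, not the thesis's deep-learning method):

```python
import numpy as np

# Averaging N registered observations of the same scene patch reduces
# zero-mean sensor noise by sqrt(N). Synthetic data; perfect registration
# is assumed, which real micro-lens images do not offer.
rng = np.random.default_rng(1)
clean = rng.random((16, 16))
noisy_stack = clean + rng.normal(0.0, 0.1, size=(25, 16, 16))

single_rmse = np.sqrt(np.mean((noisy_stack[0] - clean) ** 2))
avg_rmse = np.sqrt(np.mean((noisy_stack.mean(axis=0) - clean) ** 2))
# With N = 25, avg_rmse is roughly single_rmse / 5.
```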
In areas such as digital museums and remote research, 3D reconstruction that captures
the delicate details of subjects is desired, and technologies such as 3D reconstruction
based on macro photography have been used successfully for various purposes. We push
this further by using a microscope rather than a macro lens, so as to capture
microscopy-level details of the subject. We design and implement a scanning method
that captures a microscopy image set from a curved surface using a robotic arm,
together with a 3D reconstruction method suited to such microscopy image sets.
Modern lithographic techniques applied to stereographic imaging
The main aim of the research has been to produce and evaluate a high-quality diffusion
screen to display projected film and television images. The screens have also been found
to effectively de-pixelate LCD arrays viewed at a magnification of approximately 4x.
The production process relies on the formation of localized refractive index gradients in a
photopolymer. The photopolymer, specially formulated and supplied by Du Pont, is
exposed to actinic light through a precision contact mask to initiate polymerization within
the exposed areas. As polymerization proceeds, a monomer concentration gradient exists
between the exposed and unexposed regions allowing the monomer molecules to diffuse.
Since the longer polymer chains do not diffuse as readily, the molecular concentration of
the material, which is related to its refractive index, is then no longer uniform. The
generation of this refractive index profile can, to some extent, be controlled by careful
exposure of the photopolymer through the correct mask so that the resulting diffusion
screen can be tailored to suit specific viewing requirements. [Continues.]
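The exposure-then-diffusion mechanism described above can be caricatured as a one-dimensional reaction-diffusion process: monomer polymerises (leaving the mobile pool) only where the mask passes light, and the resulting concentration gradient drives monomer inward, so the final polymer density, and hence the refractive index, becomes non-uniform. All rate constants below are illustrative, not Du Pont material parameters.

```python
import numpy as np

# 1-D caricature of mask exposure in a photopolymer: reaction in the
# exposed window consumes monomer, diffusion replenishes it from the
# unexposed regions, and polymer (refractive index) builds non-uniformly.
n_cells, steps = 100, 2000
monomer = np.ones(n_cells)
polymer = np.zeros(n_cells)
exposed = np.zeros(n_cells, dtype=bool)
exposed[40:60] = True                  # transparent window in the contact mask

D, k, dt = 0.4, 0.05, 1.0              # diffusion coeff., reaction rate, time step
for _ in range(steps):
    react = k * dt * monomer * exposed  # polymerisation only where exposed
    polymer += react
    monomer -= react
    lap = np.roll(monomer, 1) + np.roll(monomer, -1) - 2 * monomer
    monomer += D * dt * lap             # explicit diffusion step (stable: D*dt <= 0.5)
# `polymer` is now concentrated in the exposed window, with graded edges,
# mimicking the localized refractive-index gradients described above.
```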
Perceived Depth Control in Stereoscopic Cinematography
Despite the recent explosion of interest in stereoscopic 3D (S3D) technology, the widespread adoption of the S3D medium is still significantly hindered by adverse effects associated with S3D viewing discomfort. This thesis attempts to improve the S3D viewing experience by investigating perceived-depth control methods in stereoscopic cinematography on desktop 3D displays. The main contributions of this work are: (1) A new method was developed for carrying out human-factors studies identifying the practical limits of the 3D comfort zone on a given 3D display. Our results suggest that cinematographers need to identify the specific limits of the 3D comfort zone on the target 3D display, as different 3D systems have different comfort-zone ranges. (2) A new dynamic depth mapping approach was proposed to improve depth perception in stereoscopic cinematography. The results of a human-based experiment confirmed its advantages over existing depth mapping methods in controlling the perceived depth when viewing 3D motion pictures. (3) The practicability of employing the depth-of-field (DoF) blur technique in S3D was also investigated. Our results indicate that applying DoF blur simulation to stereoscopic content may not improve the S3D viewing experience without real-time information about what the viewer is looking at. Finally, a basic guideline for stereoscopic cinematography was introduced to summarise the new findings of this thesis alongside several well-known key factors in 3D cinematography. We expect this guideline to be of particular interest not only for 3D filmmaking but also for 3D gaming, sports broadcasting and TV production.
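The perceived depth being controlled here follows from similar triangles between the two eyes and the on-screen parallax. A minimal sketch of that standard geometric model (the viewing distance and eye separation are illustrative defaults; as the thesis shows, the comfortable range of the resulting depths is display-specific and must be measured):

```python
def perceived_depth_mm(parallax_mm, view_dist_mm=700.0, eye_sep_mm=65.0):
    # Perceived depth relative to the screen plane from on-screen parallax,
    # by similar triangles: positive (uncrossed) parallax places the point
    # behind the screen, negative (crossed) parallax in front of it.
    if parallax_mm >= eye_sep_mm:
        raise ValueError("uncrossed parallax must stay below the eye separation")
    return view_dist_mm * parallax_mm / (eye_sep_mm - parallax_mm)

behind = perceived_depth_mm(10.0)     # ~127 mm behind a desktop display
in_front = perceived_depth_mm(-10.0)  # ~-93 mm, i.e. in front of the screen
```

The asymmetry (equal parallax magnitudes give unequal depths in front of and behind the screen) is one reason comfort-zone limits have to be characterised per display rather than expressed as a single parallax budget.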
Architectural Digital Photogrammetry
This study exploits the texturing techniques of common modelling software to create virtual models of existing architecture from oriented panoramas. Panoramic image-based interactive modelling is introduced as the meeting point of photography, topography, photogrammetry and modelling techniques: an interactive system for generating photorealistic, textured 3D models of architectural structures and urban scenes.
The technique is suitable for architectural survey because it is not a point-by-point survey, and it exploits the geometrical constraints of architecture to simplify modelling.
Many factors prove critical to the modelling quality and accuracy, such as the way the photos are shot and the positions from which they are taken, the stitching of multi-image panoramas, the orientation, the texturing techniques, and so on.
During the last few years, many image-based modelling programs have been released. In this research, however, photo-modelling programs were not used: the intention was to confront the fundamentals of photogrammetry, to go beyond the limitations of such software by avoiding its automatism, and to exploit the powerful commands of a program such as 3ds Max to obtain the final representation of the architecture. Such representations can be used in different fields (from detailed architectural survey to architectural representation in cinema and video games), whose accuracy and quality requirements vary accordingly.
After the theoretical study of this technique, it was applied in four close-range surveys of different types. This practice revealed the practical problems in the whole process (from photography all the way to modelling) and suggested ways to improve it and to avoid complications. The technique was compared with laser scanning to study its accuracy.
It emerged that not only is the accuracy of this technique linked to the size of the surveyed object, but the size also changes the way in which the survey should be approached.
Since the 3D modelling program is not dedicated to image-based modelling, texturing problems arose. These were analysed in terms of how the program behaves with the bitmap, how to project it, how the projection can be made interactive, and what the limitations are.