
    Optical MEMS

    Optical microelectromechanical systems (MEMS), micro-opto-electro-mechanical systems (MOEMS), or optical microsystems are devices or systems that interact with light through actuation or sensing at the micro- or millimeter scale. Optical MEMS have had enormous commercial success in projectors, displays, and fiber-optic communications. The best-known example is Texas Instruments' digital micromirror device (DMD). The development of optical MEMS was seriously impeded by the telecom bubble of 2000. Fortunately, DMDs grew their market even during that economic downturn. Meanwhile, over the last decade and a half, the optical MEMS market has been slowly but steadily recovering. During this time, the major technological change was the shift from thin-film polysilicon microstructures to single-crystal-silicon microstructures. Especially in the last few years, cloud data centers have been demanding large-port optical cross-connects (OXCs), autonomous driving has been looking for miniature LiDAR, and virtual reality/augmented reality (VR/AR) has been demanding tiny optical scanners. This is a new wave of opportunities for optical MEMS. Furthermore, several research institutes around the world have for many years been developing MOEMS devices for extreme applications (very fine tailoring of light beams in terms of phase, intensity, or wavelength) and/or extreme environments (vacuum, cryogenic temperatures). Accordingly, this Special Issue seeks to showcase research papers, short communications, and review articles that focus on (1) novel design, fabrication, control, and modeling of optical MEMS devices based on all kinds of actuation/sensing mechanisms; and (2) new developments in applying optical MEMS devices of any kind in consumer electronics, optical communications, industry, biology, medicine, agriculture, physics, astronomy, space, or defense.

    Random access spectral imaging

    A salient goal of spectral imaging is to record a so-called hyperspectral data-cube, consisting of two spatial dimensions and one spectral dimension. Traditional approaches are based on time-sequential scanning in either the spatial or the spectral dimension: spatial scanning involves passing a fixed aperture over a scene in the manner of a raster scan, while spectral scanning is generally based on the use of a tuneable filter, where typically a series of narrow-band images of a fixed field of view are recorded and assembled into the data-cube. Such techniques are suitable only when the scene in question is static or changes more slowly than the scan rate. When considering dynamic scenes, a time-resolved (snapshot) spectral imaging technique is required. Such techniques acquire the whole data-cube in a single measurement, but require a trade-off in spatial and spectral resolution. These trade-offs prevent current snapshot spectral imaging techniques from achieving resolutions on par with time-sequential techniques. Any snapshot device needs an optical architecture that gathers light from the scene and maps it to the detector in a way that allows the spatial and spectral components to be de-multiplexed to reconstruct the data-cube. This process results in the decreased resolution of snapshot devices, as it becomes a problem of mapping a 3D data-cube onto a 2D detector. The sheer volume of data in the data-cube also presents a processing challenge, particularly in the case of real-time processing. This thesis describes a prototype snapshot spectral imaging device that employs a random-spatial-access technique to record spectra only from the regions of interest in the scene, thus enabling maximisation of integration time and minimisation of data volume and recording rate. The aim of this prototype is to demonstrate how a particular optical architecture allows the effect of some of the above-mentioned bottlenecks to be removed. Underpinning the basic concept is the fact that in all practical scenes most of the spectrally interesting information is contained in relatively few pixels. The prototype system uses random spatial access to multiple points in the scene considered to be of greatest interest. This enables time-resolved, high-resolution spectrometry to be performed simultaneously at points across the full field of view. The enabling technology for the prototype was a digital micromirror device (DMD), an array of switchable mirrors, which was used to create a two-channel system. One channel led to a conventional imaging camera, while the other led to a spectrometer. The DMD acted as a dynamic aperture to the spectrometer and could be used to open and close slits in any part of the spectrometer aperture. The imaging channel was used to guide the selection of points of interest in the scene. An extensive geometric calibration was performed to determine the relationships between the DMD and the two channels of the system. Two demonstrations of the prototype are given in this thesis: a dynamic biological scene, and a static scene sampled using statistical sampling methods enabled by the dynamic aperture of the system. The dynamic scene consisted of red blood cells in motion and undergoing a process of de-oxygenation, which resulted in a change in the spectrum. Ten red blood cells were tracked across the scene and the expected change in spectrum was observed.
For the second example, the prototype was modified for Raman spectroscopy by adding laser illumination; a mineral sample was scanned and used to test statistical sampling methods. These methods exploited the re-configurable aperture of the system to sample the scene using blind random sampling and a grid-based sampling approach. Other spectral imaging systems have a fixed aperture and cannot operate such sampling schemes.
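As a rough illustration of how such a random-spatial-access aperture could be driven, the sketch below (Python, with hypothetical function and variable names; the prototype's actual calibration and control code are not given here) maps points of interest selected on the imaging channel to DMD mirror coordinates through an assumed calibrated homography and opens small slits at those locations.

```python
import numpy as np

def camera_to_dmd(points_cam, H):
    """Map camera-pixel coordinates (x, y) to DMD mirror coordinates using a
    calibrated 3x3 homography H (one outcome of the geometric calibration)."""
    pts = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    mapped = (H @ pts.T).T
    return np.rint(mapped[:, :2] / mapped[:, 2:3]).astype(int)

def build_dmd_mask(points_cam, H, dmd_shape=(768, 1024), half_width=1):
    """Open a small 'slit' of ON mirrors around each point of interest; all
    other mirrors stay OFF, diverting their light away from the spectrometer."""
    mask = np.zeros(dmd_shape, dtype=bool)
    for c, r in camera_to_dmd(points_cam, H):      # (x, y) -> (column, row)
        if 0 <= r < dmd_shape[0] and 0 <= c < dmd_shape[1]:
            mask[max(r - half_width, 0):r + half_width + 1,
                 max(c - half_width, 0):c + half_width + 1] = True
    return mask

# Example: three regions of interest picked from the imaging channel
H = np.eye(3)                                      # placeholder calibration result
roi = np.array([[120.0, 340.0], [400.0, 512.0], [610.0, 700.0]])   # camera (x, y)
mask = build_dmd_mask(roi, H)
```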

    Optical Design and Development of a Micromirror-Based High-Accuracy Confocal Microscope

    Thesis (PhD) -- İstanbul Technical University, Institute of Science and Technology, 2008. In this thesis, a new type of confocal microscope was developed with the aid of a micromirror-array optical switch. In this new measurement system, the micromirror array is imaged 1:1 onto the surface to be measured, which allows hundreds of points to be measured simultaneously at video frequency. The 3D surface is obtained by stacking the 2D information acquired at various heights. The optical design of the microscope and the development stages of the resulting optical system are given in full detail, and the experimental setup is discussed in detail. In measurements made with a 50x, 0.95 NA microscope objective, a lateral resolution of 1.5 µm was measured; the micromirror-array element played an important role in achieving such good results. The measurement capability of the developed system was demonstrated using different measurement standards. During this project a new version of the confocal microscope was developed in which transverse (x, y) surface scanning is performed by a digital micromirror device (DMD). The DMD is imaged onto the object's surface, allowing confocal scanning of the field of view at a rate faster than video rate without physical movement of the sample. 3D surface reconstruction is performed with stacks of 2D image planes acquired at different depths. Optical system design issues, their solutions, and a detailed description of the experimental setup are presented. In experiments using a 100x, 0.95 NA objective, a full width at half maximum (FWHM) of 1.5 µm was obtained. The optical resolution of the developed system is obtained mainly with the help of the DMD unit. The 3D imaging capabilities of the developed system using the DMD unit were demonstrated on various test objects.
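A minimal sketch of the kind of 3D surface reconstruction described above, assuming a confocal image stack acquired at uniformly spaced depths; the peak of the axial response is located per pixel and refined with a three-point parabolic fit (function and variable names are illustrative, not taken from the thesis).

```python
import numpy as np

def surface_from_confocal_stack(stack, z_positions):
    """stack: (Nz, H, W) confocal intensity images recorded at depths z_positions.
    For each pixel, take the depth of peak axial response as the surface height,
    refined by a three-point parabolic fit around the maximum."""
    stack = np.asarray(stack, dtype=float)
    z = np.asarray(z_positions, dtype=float)
    idx = np.argmax(stack, axis=0)                      # index of peak response
    idx = np.clip(idx, 1, len(z) - 2)                   # keep neighbours in range
    rows, cols = np.indices(idx.shape)
    y0 = stack[idx - 1, rows, cols]
    y1 = stack[idx, rows, cols]
    y2 = stack[idx + 1, rows, cols]
    denom = y0 - 2.0 * y1 + y2
    delta = np.where(np.abs(denom) > 1e-12, 0.5 * (y0 - y2) / denom, 0.0)
    dz = z[1] - z[0]                                    # assumes uniform z spacing
    return z[idx] + delta * dz                          # height map of shape (H, W)
```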

    Dual-mode room temperature self-calibrating photodiodes approaching cryogenic radiometer uncertainty

    The room-temperature dual-mode self-calibrating detector combines low-loss photodiodes with electrical substitution radiometry for the determination of optical power. By using thermal detection as a built-in reference in the detector, the internal losses of the photodiode can be determined directly, without the need for an external reference. Computer simulations were used to develop a thermal design that minimises the electro-optical non-equivalence in electrical substitution. Based on this thermal design, we produced detector modules that we mounted in a trap structure for minimised reflection loss. The thermal simulations predicted a change in response of around 280 parts per million per millimetre when changing the position of the beam along the centre line of the photodiode, and we were able to reproduce this change experimentally. We report on dual-mode internal loss estimation measurements with radiation at 488 nm at power levels of 500 μW, 875 μW and 1250 μW, using two different methods of electrical substitution. In addition, we present three different calculation algorithms for determining the optical power in thermal mode, all three showing consistent results. We present room-temperature optical power measurements at an uncertainty level approaching that of the cryogenic radiometer, at 400 ppm (k = 2), where the type A standard uncertainty in the thermal measurement contributed only 26 ppm at 1250 μW in a 6-hour-long measurement sequence.
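The following sketch illustrates the two measurement modes in simplified form, assuming ideal electro-optical equivalence, full absorption, and no reflection or non-equivalence corrections (all of which the actual instrument accounts for); function names and numbers are illustrative only.

```python
import numpy as np

H_PLANCK = 6.62607015e-34   # J s
C_LIGHT  = 2.99792458e8     # m/s
E_CHARGE = 1.602176634e-19  # C

def optical_power_substitution(p_elec_dark, p_elec_illuminated):
    """Electrical substitution: the optical power equals the reduction in
    electrical heater power needed to hold the detector at the same temperature
    (idealised: no electro-optical non-equivalence correction applied)."""
    return p_elec_dark - p_elec_illuminated

def internal_loss(i_photo, p_optical, wavelength):
    """Estimate the photodiode's internal loss by comparing the measured
    photocurrent with the ideal (loss-free) responsivity e*lambda/(h*c)."""
    responsivity_ideal = E_CHARGE * wavelength / (H_PLANCK * C_LIGHT)   # A/W
    return 1.0 - i_photo / (responsivity_ideal * p_optical)

# Example at 488 nm and roughly 500 uW (invented numbers)
p_opt = optical_power_substitution(p_elec_dark=2.000e-3, p_elec_illuminated=1.500e-3)
delta = internal_loss(i_photo=1.90e-4, p_optical=p_opt, wavelength=488e-9)
```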

    Meshfree Approximation Methods For Free-form Optical Surfaces With Applications To Head-worn Displays

    Compact, lightweight optical designs that achieve acceptable image quality, field of view, eye clearance, and eyebox size while operating across the visible spectrum are the key to the success of next-generation head-worn displays. The first part of this thesis reports on the design, fabrication, and analysis of off-axis magnifier designs. The first design is catadioptric and consists of two elements. The lens utilizes a diffractive optical element and the mirror has a free-form surface described with an x-y polynomial. A comparison of color correction between doublets and single-layer diffractive optical elements in an eyepiece as a function of eye clearance is provided to justify the use of a diffractive optical element. The dual-element design has an 8 mm diameter eyebox, 15 mm eye clearance, and a 20 degree diagonal full field, and is designed to operate across the visible spectrum between 450-650 nm. 20% MTF at the Nyquist frequency with less than 3% distortion has been achieved in the dual-element head-worn display. An ideal solution for a head-worn display would be a single free-form-mirror design. A single-surface mirror has no dispersion; therefore, color correction is not required. A single-surface mirror can also be made see-through by machining the appropriate surface shape on the opposite side to form a zero-power shell. The second design consists of a single off-axis free-form mirror described with an x-y polynomial, which achieves a 3 mm diameter exit pupil, 15 mm eye relief, and a 24 degree diagonal full field of view. The second design achieves 10% MTF at the Nyquist frequency set by the pixel spacing of the VGA microdisplay, with less than 3% distortion. Both designs have been fabricated using diamond-turning techniques. Finally, this thesis addresses the question of what the optimal surface shape is for a single mirror constrained in an off-axis magnifier configuration with multiple fields. Typical optical surfaces implemented in raytrace codes today are functions mapping two-dimensional vectors to real numbers. The majority of optical designs to date have relied on conic sections and polynomials as the functions of choice. The choice of conic sections is justified since conic sections are stigmatic surfaces under certain imaging geometries. The choice of polynomials, from the point of view of surface description, can be challenged. A polynomial surface description may link a designer's understanding of the wavefront aberrations and the surface description. The limitations of using multivariate polynomials are described by a theorem due to Mairhuber and Curtis from approximation theory. This thesis proposes and applies radial basis functions to represent free-form optical surfaces as an alternative to multivariate polynomials. We compare the polynomial descriptions to radial basis functions using the MTF criterion. The benefits of using radial basis functions for surface description are summarized in the context of specific head-worn displays. The benefits include, for example, the performance increase measured by the MTF, or the ability to increase the field of view or pupil size.
Zernike polynomials are a complete and orthogonal basis over the unit circle and can be orthogonalized for rectangular or hexagonal pupils using Gram-Schmidt; however, taking practical considerations into account, such as optimization time and the maximum number of variables available in current raytrace codes, for the specific case of the single off-axis magnifier with a 3 mm pupil, 15 mm eye relief, and 24 degree diagonal full field of view, we found Gaussian radial basis functions to yield a 20% gain in the average MTF at 17 field points compared with a Zernike description (using 66 terms) and an x-y polynomial up to and including 10th order. The radial-basis-function representation, being a linear combination of localized basis functions, is not limited to circular apertures. Visualization tools such as the field map plots provided by nodal aberration theory have been applied during the analysis of the off-axis systems discussed in this thesis. Full-field displays are used to establish node locations within the field of view for the dual-element head-worn display. The judicious separation of the nodes along the x-direction in the field of view results in well-behaved MTF plots. This is in contrast to the expectation of achieving better performance by restoring symmetry via collapsing the nodes to yield field-quadratic astigmatism.
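A minimal sketch of the Gaussian radial-basis-function surface description discussed above: the sag is a linear combination of Gaussians on a grid of centers, written here in Python with illustrative aperture, grid, and width values rather than the thesis's actual design parameters.

```python
import numpy as np

def gaussian_rbf_sag(x, y, centers, weights, sigma):
    """Free-form surface sag z(x, y) as a linear combination of Gaussian radial
    basis functions placed at 'centers':
        z(x, y) = sum_i w_i * exp(-((x - xi)^2 + (y - yi)^2) / (2 * sigma^2))
    Unlike a global polynomial, each term acts only locally, and the
    representation is not tied to a circular aperture."""
    x, y = np.asarray(x), np.asarray(y)
    z = np.zeros(np.broadcast(x, y).shape)
    for (cx, cy), w in zip(centers, weights):
        z += w * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    return z

# Example: a coarse 4x4 grid of basis centers over a 30 mm square aperture
grid = np.linspace(-15.0, 15.0, 4)                            # mm
centers = [(cx, cy) for cx in grid for cy in grid]
weights = np.zeros(len(centers))                              # solved for during optimization
xx, yy = np.meshgrid(np.linspace(-15, 15, 101), np.linspace(-15, 15, 101))
sag = gaussian_rbf_sag(xx, yy, centers, weights, sigma=7.5)   # mm
```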

    Modeling and simulation of adaptive multimodal optical sensors for target tracking in the visible to near infrared

    This work investigates an integrated aerial remote sensor design approach to address moving-target detection and tracking problems within highly cluttered, dynamic ground-based scenes. Sophisticated simulation methodologies and scene phenomenology validations have resulted in advancements in artificial multimodal truth video synthesis. Complex modeling of novel micro-opto-electro-mechanical systems (MOEMS) devices, optical systems, and detector arrays has resulted in a proof of concept for a state-of-the-art imaging spectropolarimeter sensor model that does not suffer from typical multimodal image registration problems. The test methodology developed for this work provides the ability to quantify the performance of a target tracking application with varying ground scenery, flight characteristics, or sensor specifications. The culmination of this research is an end-to-end simulated demonstration of multimodal aerial remote sensing and target tracking. Deeply hidden target recognition is shown to be enhanced through the fusion of panchromatic, hyperspectral, and polarimetric image modalities. The Digital Imaging and Remote Sensing Image Generation model was leveraged to synthesize truth spectropolarimetric sensor-reaching radiance image cubes composed of coregistered Stokes vector bands in the visible to near-infrared. An intricate synthetic urban scene containing numerous moving vehicular targets was imaged from a virtual sensor aboard an aerial platform encircling a stare point. An adaptive sensor model was designed with a superpixel array of MOEMS devices fabricated atop a division-of-focal-plane detector. Degree of linear polarization (DoLP) imagery is acquired by combining three adjacent micropolarizer outputs within each 2x2 superpixel, whose respective transmissions vary with wavelength, relative angle of polarization, and wire-grid spacing. A novel micromirror within each superpixel adaptively relays light between a panchromatic imaging channel and a hyperspectral spectrometer channel. All optical and detector sensor effects were radiometrically modeled using MATLAB and optical lens design software. Orthorectification of all sensor outputs yields multimodal pseudo-nadir observation video at a fixed ground sample distance across an area of responsibility. A proprietary MATLAB-based target tracker accomplishes change detection between sequential panchromatic or DoLP observation frames, and queries the sensor for hyperspectral pixels to aid in track initialization and maintenance. Image quality, spectral quality, and tracking performance metrics are reported for varying scenario parameters, including target occlusions within the scene, declination angle and jitter of the aerial platform, micropolarizer diattenuation, and spectral/spatial resolution of the adaptive sensor outputs. DoLP observations were found to track moving vehicles better than panchromatic observations at high oblique angles when facing the sensor generally toward the sun. Vehicular occlusions due to tree canopies and the parallax effects of tall buildings significantly reduced tracking performance, as expected. Smaller MOEMS pixel sizes drastically improved track performance, but also generated a significant number of false tracks. Atmospheric haze from urban aerosols eliminated the tracking utility of DoLP observations, while aerial platform jitter without image stabilization eliminated tracking utility in both modalities.
Wire-grid micropolarizers with very low VNIR diattenuation were found to still extinguish enough cross-polarized light to successfully distinguish and track moving vehicles against their urban background. Thus, state-of-the-art lithographic techniques to create finer wire-grid spacings that exhibit high VNIR diattenuation may not be required.
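As a simplified illustration of how a DoLP observation might be formed from a 2x2 superpixel, the sketch below assumes ideal micropolarizers at 0, 45, and 90 degrees and omits the calibration, diattenuation, and registration effects modeled in the actual simulation.

```python
import numpy as np

def dolp_from_superpixel(i0, i45, i90):
    """Linear Stokes parameters and degree of linear polarization from three
    micropolarizer outputs of a 2x2 superpixel (assumed analyser orientations
    of 0, 45 and 90 degrees; ideal diattenuation, no calibration terms).
        S0 = I0 + I90,  S1 = I0 - I90,  S2 = 2*I45 - (I0 + I90)
        DoLP = sqrt(S1^2 + S2^2) / S0
    """
    i0, i45, i90 = (np.asarray(a, dtype=float) for a in (i0, i45, i90))
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.where(s0 > 0, s0, np.inf)

# Example: three co-located micropolarizer frames -> one DoLP observation frame
frames = np.random.default_rng(0).uniform(0.0, 1.0, size=(3, 64, 64))
dolp = dolp_from_superpixel(*frames)
```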

    Applications of single-pixel imaging

    In this body of work, several single-pixel imaging applications are presented, based on structured light manipulation via a Digital Micromirror Device (DMD) and a single-element photodetector (PD). This is commonly known as computational single-pixel imaging, and is achieved by using the measurements made by the PD to weight a series of projected structured light-fields. Each weight indicates the strength of correlation between a light-field and the object or scene placed in its propagation path. After many iterations, the ensemble average of the weighted structured light-fields converges to an image of the object. Historically, computational single-pixel imaging has suffered from long image acquisition times and low resolution, which prevented physical systems from competing with conventional imaging in any form. Advances in computer and DMD technology have opened new avenues of research for this novel imaging technique. These advances have been utilised in this work by creating fast-acquisition demonstrator systems with real-world applications such as multi-wavelength, polarisation, and long-range imaging. Several PDs were added to allow simultaneous measurement of multiple images in the desired application. For multi-wavelength imaging, RGB and white-light illumination was spectrally filtered onto three detectors to create full-colour images. Conversely, the same multi-detector approach allowed simultaneous measurement of the orthogonal linear polarisation states essential to Stokes parameter image reconstruction. Differential projection of the structured light-fields further allowed the single-pixel camera to compensate for some sources of real-world noise, such as background illumination. This work demonstrates an evolution of the single-pixel camera: from a system constrained to an optical bench and capable only of imaging simple, binary transmissive objects twice per hour, to a semi-portable camera capable of 2D reconstructions of 3D scenes at multiple frames per second over a range of 20 kilometres. These improvements in capability cement the idea that the single-pixel camera is now a viable alternative imaging technology.
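A minimal sketch of the basic reconstruction idea, assuming a differential Hadamard basis scan (the demonstrators in the thesis use more elaborate hardware and pattern sets); the 'measure' callback stands in for the DMD-plus-photodiode measurement.

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_reconstruct(measure, n=32):
    """Basis-scan single-pixel reconstruction with differential Hadamard patterns.
    'measure(pattern)' plays the role of the DMD and photodiode: it returns the
    total light collected when 'pattern' is displayed. Differential projection
    (a pattern followed by its complement) suppresses constant background light."""
    H = hadamard(n * n)                                  # rows are +/-1 patterns
    image = np.zeros(n * n)
    for i in range(n * n):
        pos = (H[i] > 0).astype(float).reshape(n, n)     # mirrors ON for +1 entries
        neg = 1.0 - pos                                  # complementary pattern
        w = measure(pos) - measure(neg)                  # differential weight
        image += w * H[i]                                # weighted ensemble sum
    return image.reshape(n, n) / (n * n)

# Example with a simulated scene standing in for the photodiode measurement
scene = np.zeros((32, 32)); scene[8:24, 12:20] = 1.0
recon = single_pixel_reconstruct(lambda p: float((p * scene).sum()), n=32)
```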

    Development of a time gated, compressive detection Raman instrument for effective fluorescence rejection and multilayer analysis

    In this thesis we have developed and implemented a Raman instrument of novel design that uses a combination of state-of-the-art technologies to provide a temporal dimension in the measured spectrum, with the goal of effective fluorescence suppression. By combining high-repetition-rate, picosecond laser pulses and high-temporal-resolution detection systems with MEMS devices, we developed a time-gated Raman instrument that is capable of effective fluorescence suppression and of producing time-gated Raman maps, thanks to the improved signal achievable through compressive detection. Time-gated Raman spectroscopy was then investigated for pigmented samples of relevance to cultural heritage and to biology. The instrument was able to recover Raman spectral information from highly fluorescent samples where a typical CCD-based Raman instrument with a CW laser typically fails. Multilayer samples were measured and 3D mapping performed. By utilizing multiplexing, depth information was measured and calculated via photon time of flight. The design of the instrument was optimized to overcome some of the issues that plague current time-gated Raman techniques (portability, low fill factors, temporal resolution). The system uses a single-element single-photon avalanche diode as the detector, to provide the best possible temporal resolution, and a digital micromirror device (DMD) as the wavelength-selective component, to provide a high fill factor for maximum throughput. The DMD allows multiple spectral features to be directed to the detector at any one time (multiplexed compressive detection). The increase in signal from multiplexing allows time-gated Raman maps to be acquired on a time scale comparable to conventional Raman instruments (seconds per pixel). This study demonstrates the feasibility of time-gated Raman mapping for a range of applications.
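A simplified sketch of multiplexed compressive detection, assuming known reference component spectra and binary DMD masks, with the time gating already applied to the photon counts; names and dimensions are illustrative rather than those of the instrument described above.

```python
import numpy as np

def estimate_concentrations(counts, masks, components):
    """Multiplexed compressive detection (simplified):
    counts[k]     - time-gated photon counts recorded with DMD mask k
    masks[k, j]   - 1 if spectral channel j is steered to the SPAD for mask k
    components    - known reference Raman spectra, one per column
    The counts are modelled as counts ~ masks @ components @ a, and the
    component weights a are recovered by least squares."""
    A = masks @ components
    a, *_ = np.linalg.lstsq(A, counts, rcond=None)
    return a

# Example with two hypothetical components over 256 spectral channels
rng = np.random.default_rng(1)
components = np.abs(rng.normal(size=(256, 2)))
masks = (rng.uniform(size=(8, 256)) > 0.5).astype(float)   # 8 binary DMD masks
true_a = np.array([0.7, 0.3])
counts = masks @ components @ true_a + rng.normal(scale=0.01, size=8)
a_hat = estimate_concentrations(counts, masks, components)
```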

    New implementations of phase-contrast imaging

    Phase-contrast imaging is a method of imaging widely used in biomedical research and applications. It is a label-free method that exploits intrinsic differences in the refractive index of different tissues to differentiate between the biological structures under analysis. The basic principle of phase-contrast imaging has inspired many implementations suited to different applications. This thesis explores multiple novel implementations of phase-contrast imaging, in the following order. (1) We combined the scanning Oblique Back-illumination Microscope (sOBM) and a confocal microscope to produce phase- and fluorescence-contrast images in an endomicroscopy configuration. This dual-modality design provides co-registered, complementary labeled and unlabeled contrast of the sample. We further miniaturized the probe by dispensing with the two optical fibers in our old design, and we present proof-of-principle demonstrations with ex-vivo mouse colon tissue. (2) We then explored sOBM-based phase- and amplitude-contrast imaging under different wavelengths. Hyperspectral imaging is achieved by multiplexing a wide-range supercontinuum laser with a Michelson interferometer (similar to Fourier transform spectroscopy). It features simultaneous acquisition of hyperspectral phase and amplitude images of arbitrarily thick scattering biological samples. Proof-of-principle demonstrations are presented with the chorioallantoic membrane of a chick embryo, illustrating the possibility of high-resolution hemodynamics imaging in thick tissue. (3) We focused on increasing the throughput of flow cytometry using the principle of phase-contrast imaging and compressive sensing. By utilizing the linearity of scattered patterns under partially coherent illumination, our cytometer can detect multiple objects in the same field of view. By utilizing an optimized matched filter on the pupil plane, it also provides increased information capacity for each measurement without sacrificing speed. We demonstrated a throughput of over 10,000 particles/s with accuracy over 91% in our results. (4) A fourth part describes the principle and preliminary results of a computational fluorescence endomicroscope. It uses a numerical method to achieve a sectioning effect and renders a pseudo-3D image stack from a single shot. The results are compared with a true 3D image stack acquired with a confocal microscope.
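As a simplified illustration of the Fourier-transform-spectroscopy style demultiplexing mentioned in part (2), the sketch below recovers a spectrum from an interferogram sampled at uniform optical path differences; the sampling parameters and source lines are invented for the example.

```python
import numpy as np

def spectrum_from_interferogram(interferogram, opd_step):
    """Fourier-transform-spectroscopy style demultiplexing (simplified):
    an interferogram sampled at equally spaced optical path differences is
    Fourier transformed to recover intensity versus wavenumber."""
    interferogram = np.asarray(interferogram, dtype=float)
    ac = interferogram - interferogram.mean()            # remove the DC pedestal
    spectrum = np.abs(np.fft.rfft(ac))
    wavenumber = np.fft.rfftfreq(len(ac), d=opd_step)    # cycles per metre of OPD
    return wavenumber, spectrum

# Example: a two-line source sampled over 0.2 mm of OPD in 100 nm steps
opd = np.arange(0.0, 0.2e-3, 100e-9)
igram = (1.0
         + 0.5 * np.cos(2 * np.pi * opd / 500e-9)        # 500 nm line
         + 0.3 * np.cos(2 * np.pi * opd / 650e-9))       # 650 nm line
wn, spec = spectrum_from_interferogram(igram, opd_step=100e-9)
```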