
    A multi-aperture optical flow estimation method for an artificial compound eye

    © 2019 IOS Press and the authors. All rights reserved. An artificial compound eye (ACE) is a bio-inspired vision sensor that mimics the natural compound eye typical of insects. Through its multiple apertures, the eye can image large fields of the outside world. Because of how it operates, the ACE is subject to optical flow, that is, the apparent motion of objects viewed by the eye. This paper proposes a method to estimate optical flow from the multiple images captured across the apertures. Starting from descriptor-based initial optical flows, a unified global energy function incorporates the information from all apertures and simultaneously recovers their optical flows. The energy function imposes a compound flow-field consistency assumption alongside the usual brightness-constancy and piecewise-smoothness assumptions. This formulation efficiently binds the flow fields in time and space and enables view-consistent optical flow estimation. Experimental results on real and synthetic data demonstrate that the proposed method recovers view-consistent optical flows across the apertures and outperforms other optical flow methods on multi-aperture images.
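
    As a sketch of what such a unified energy might look like (an illustrative toy in NumPy, not the authors' formulation: the linearized data term, the quadratic smoothness penalty, and the cross-aperture consistency term that ties each flow to the aperture mean are all simplified assumptions):

    ```python
    import numpy as np

    def energy(flows, I0s, I1s, lam_smooth=0.1, lam_consist=0.5):
        """Toy unified global energy over multi-aperture flows.
        flows: (A, H, W, 2) flow field per aperture
        I0s, I1s: (A, H, W) image pair per aperture."""
        E = 0.0
        for a in range(flows.shape[0]):
            u, v = flows[a, ..., 0], flows[a, ..., 1]
            # Brightness constancy, linearized: Ix*u + Iy*v + It ~ 0
            Ix = np.gradient(I0s[a], axis=1)
            Iy = np.gradient(I0s[a], axis=0)
            It = I1s[a] - I0s[a]
            E += np.sum((Ix * u + Iy * v + It) ** 2)
            # Piecewise smoothness: penalize spatial flow gradients
            E += lam_smooth * sum(np.sum(np.gradient(f, axis=ax) ** 2)
                                  for f in (u, v) for ax in (0, 1))
        # Cross-aperture consistency: flows should agree across apertures
        mean_flow = flows.mean(axis=0, keepdims=True)
        E += lam_consist * np.sum((flows - mean_flow) ** 2)
        return E
    ```

    Minimizing this energy jointly over all apertures is what couples the per-aperture flow fields; with the consistency weight set to zero it decomposes into independent Horn–Schunck-style problems.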

    Keratoprostheses for corneal blindness: a review of contemporary devices

    According to the World Health Organization, 4.9 million people worldwide are blind due to corneal pathology. Corneal transplantation is successful and curative for the majority of these cases. However, it is less successful in diseases that produce corneal neovascularization, a dry ocular surface, and recurrent inflammation or infection. A keratoprosthesis, or KPro, is the only alternative for restoring vision when a corneal graft is doomed to fail. Although a number of KPros have been proposed, only two devices, the Boston type-1 KPro and the osteo-odonto-KPro, have come to the fore. The former is entirely synthetic and the latter semi-biological in constitution. The two KPros differ in surgical technique and indications. Keratoprosthetic surgery is complex and should only be undertaken in specialized centers where expertise, multidisciplinary teams, and resources are available. In this article, we briefly discuss some of the prominent historical KPros and contemporary devices.

    Miniature curved artificial compound eyes.

    In most animal species, vision is mediated by compound eyes, which offer lower resolution than vertebrate single-lens eyes, but significantly larger fields of view with negligible distortion and spherical aberration, as well as high temporal resolution in a tiny package. Compound eyes are ideally suited for fast panoramic motion perception. Engineering a miniature artificial compound eye is challenging because it requires accurate alignment of photoreceptive and optical components on a curved surface. Here, we describe a unique design method for biomimetic compound eyes featuring a panoramic, undistorted field of view in a very thin package. The design consists of three planar layers of separately produced arrays, namely, a microlens array, a neuromorphic photodetector array, and a flexible printed circuit board that are stacked, cut, and curved to produce a mechanically flexible imager. Following this method, we have prototyped and characterized an artificial compound eye bearing a hemispherical field of view with embedded and programmable low-power signal processing, high temporal resolution, and local adaptation to illumination. The prototyped artificial compound eye possesses several characteristics similar to the eye of the fruit fly Drosophila and other arthropod species. This design method opens up additional vistas for a broad range of applications in which wide-field motion detection is at a premium, such as collision-free navigation of terrestrial and aerospace vehicles, and for the experimental testing of insect vision theories.

    A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head

    Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 "clean B-scans" (multi-frame B-scans) and their corresponding "noisy B-scans" (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively, and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index metric (MSSIM). Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all the ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The MSSIM increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
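
    The reported metrics can be reproduced with a few lines of NumPy. Note that SNR and CNR conventions vary between OCT papers, so the definitions below (mean signal over background noise in dB, and region contrast over background noise) are common choices rather than necessarily the ones used in this study:

    ```python
    import numpy as np

    def snr_db(img, background_mask):
        """SNR in dB: mean of the signal region over the standard
        deviation of the background (one common OCT convention)."""
        signal = img[~background_mask].mean()
        noise = img[background_mask].std()
        return 20.0 * np.log10(signal / noise)

    def cnr(img, tissue_mask, background_mask):
        """CNR: absolute contrast between a tissue region and the
        background, normalized by the background noise."""
        mu_t = img[tissue_mask].mean()
        mu_b = img[background_mask].mean()
        sigma_b = img[background_mask].std()
        return abs(mu_t - mu_b) / sigma_b
    ```

    In practice the masks would be drawn on segmented ONH tissue layers and on a region above the retinal surface containing only noise.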

    Linear Quasi-Parallax SfM for various classes of biological eyes

    Ph.D. thesis (Doctor of Philosophy).

    Curve optimization for the anidolic daylight system counterbalancing energy saving, indoor visual and thermal comfort for Sydney dwellings

    Daylight penetration significantly affects a building's thermal and daylighting performance, serving the dual function of admitting sunlight and creating a pleasant indoor environment. Recent attention has focused on the provision of daylight in the rear part of indoor spaces when designing sustainable buildings. Passive Anidolic Daylighting Systems (ADS) are effective tools for collecting daylight and redistributing it towards the back of the room. As affordable and low-maintenance systems, they can provide indoor daylight and alleviate the problem of daylight over-provision near the window and under-provision at the rear of the room. Much of the current literature on the ADS pays particular attention to visual comfort and rarely to thermal comfort. A reasonable compromise between visual and thermal comfort as well as energy consumption therefore becomes the main issue for energy-optimized aperture design in the tropics and subtropics, in cities such as Sydney, Australia. The objective of the current study was to devise a system that acts both as a shading and as a reflecting device. The central aim of this paper is to find the optimum curve that maximizes daylight admission without an expensive active tracking system. A combination of detailed simulation (considering every possible sky condition throughout a year) and multi-objective optimization (considering indoor visual and thermal comfort as well as the view to the outside), validated by field measurement, yielded the optimum ADS for local dwellings in Sydney, Australia. An approximate 62% increase in Daylight Factor, a 5% decrease in yearly average heating load, 17% savings in annual artificial lighting energy, and a 30% decrease in Predicted Percentage Dissatisfied (PPD) were achieved by optimizing the ADS curve.
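
    The trade-off the paper resolves can be illustrated with a simple weighted-sum scalarization of the four reported objectives (the weights and the [0, 1] normalization below are hypothetical, for illustration only; the study used a full multi-objective optimization validated against field measurements):

    ```python
    import numpy as np

    def weighted_score(daylight_factor, heating_load, lighting_energy, ppd,
                       weights=(0.4, 0.2, 0.2, 0.2)):
        """Toy weighted-sum score for one candidate ADS curve.
        All inputs are assumed normalized to [0, 1]; daylight factor is
        a benefit (higher is better), the other three are costs.
        Lower score = better design. Weights are illustrative."""
        w = np.asarray(weights, dtype=float)
        costs = np.array([1.0 - daylight_factor,  # turn benefit into a cost
                          heating_load, lighting_energy, ppd])
        return float(w @ costs)
    ```

    Ranking candidate curve geometries by such a score (or, as in the paper, by Pareto dominance over the separate objectives) is what lets the optimization balance daylight gain against heating load and occupant discomfort.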

    Plenoptic Signal Processing for Robust Vision in Field Robotics

    This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented dealing with monocular time-series and light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
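
    The closed-form, derivative-based flavor of these methods can be illustrated in 2D: a single translational flow recovered in one least-squares solve from first-order derivatives. This is a reduction of the thesis's 4D "plenoptic flow" to ordinary images, with an illustrative function name and setup:

    ```python
    import numpy as np

    def global_flow(I0, I1):
        """Non-iterative least-squares estimate of one translational
        flow (u, v) between two frames, from first-order derivatives.
        Solves Ix*u + Iy*v = -It over all pixels in closed form."""
        Ix = np.gradient(I0, axis=1)   # horizontal image gradient
        Iy = np.gradient(I0, axis=0)   # vertical image gradient
        It = I1 - I0                   # temporal derivative
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
        (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
        return u, v
    ```

    The appeal over iterative warping schemes is exactly what the abstract emphasizes: a constant, predictable runtime (one linear solve), which is what makes embedded real-time use plausible.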


    Bio-Inspired Multi-Spectral and Polarization Imaging Sensors for Image-Guided Surgery

    Image-guided surgery (IGS) can enhance cancer treatment by decreasing, and ideally eliminating, positive tumor margins and iatrogenic damage to healthy tissue. Current state-of-the-art near-infrared fluorescence imaging systems are bulky, costly, lack sensitivity under surgical illumination, and lack co-registration accuracy between multimodal images. As a result, an overwhelming majority of physicians still rely on their unaided eyes and palpation as the primary sensing modalities for distinguishing cancerous from healthy tissue. In my thesis, I have addressed these challenges in IGS by mimicking the visual systems of several animals to construct low-power, compact and highly sensitive multi-spectral and color-polarization sensors. I have realized single-chip multi-spectral imagers with 1000-fold higher sensitivity and 7-fold better spatial co-registration accuracy compared to clinical imaging systems in current use, by monolithically integrating spectral tapetal and polarization filters with an array of vertically stacked photodetectors. These imaging sensors offer the unique capability of simultaneously imaging color, polarization, and multiple fluorophores for near-infrared fluorescence imaging. Preclinical and clinical data demonstrate seamless integration of these technologies into the surgical workflow while providing surgeons with real-time information on the location of cancerous tissue and sentinel lymph nodes, respectively. Due to their low cost, these bio-inspired sensors will provide resource-limited hospitals with much-needed technology to enable more accurate value-based health care.