High-resolution transport-of-intensity quantitative phase microscopy with annular illumination
For quantitative phase imaging (QPI) based on transport-of-intensity equation
(TIE), partially coherent illumination provides speckle-free imaging,
compatibility with brightfield microscopy, and transverse resolution beyond
the coherent diffraction limit. Unfortunately, in a conventional microscope with a
circular illumination aperture, partial coherence tends to diminish the phase
contrast, exacerbating the inherent noise-to-resolution tradeoff in TIE
imaging, resulting in strong low-frequency artifacts and compromised imaging
resolution. Here, we demonstrate how these issues can be effectively addressed
by replacing the conventional circular illumination aperture with an annular
one. The matched annular illumination not only strongly boosts the phase
contrast for low spatial frequencies but also significantly improves the practical
imaging resolution to near the incoherent diffraction limit. By incorporating
high-numerical-aperture (NA) illumination as well as a high-NA objective, it is
shown, for the first time, that TIE phase imaging can achieve a transverse
resolution up to 208 nm, corresponding to an effective NA of 2.66. Time-lapse
imaging of in vitro HeLa cells is exemplified, revealing cellular morphology and
subcellular dynamics during cell mitosis and apoptosis. Given its
capability for high-resolution QPI as well as its compatibility with widely
available brightfield microscopy hardware, the proposed approach is expected to
be adopted by the wider biology and medicine community.
Comment: This manuscript was originally submitted on 20 Feb. 201
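
As a quick, illustrative consistency check (not part of the abstract), the
reported 208 nm resolution matches the common partially coherent estimate
lateral resolution ≈ λ / (NA_obj + NA_ill); the wavelength and the split of the
effective NA of 2.66 into objective and illumination contributions below are
assumptions:

    # Partially coherent lateral resolution estimate for TIE/annular-illumination QPI.
    # Assumed values (not stated in the abstract): lambda ~ 550 nm and a hypothetical
    # split of the effective NA into objective and illumination contributions.
    wavelength_nm = 550.0    # assumed central illumination wavelength
    na_objective = 1.40      # hypothetical high-NA oil-immersion objective
    na_illumination = 1.26   # hypothetical matched annular illumination NA

    na_effective = na_objective + na_illumination   # 2.66, as reported
    resolution_nm = wavelength_nm / na_effective    # ~207 nm, close to the 208 nm reported

    print(f"effective NA = {na_effective:.2f}")
    print(f"estimated lateral resolution = {resolution_nm:.0f} nm")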
Visual landmark sequence-based indoor localization
This paper presents a method that uses common objects as landmarks for smartphone-based indoor localization and navigation. First, a topological map marking the relative positions of common objects such as doors, stairs and toilets is generated from the floor plan. Second, a computer vision technique employing the latest deep learning technology has been developed for detecting common indoor objects from videos captured by smartphone. Third, a second-order hidden Markov model is applied to match the detected indoor landmark sequence to the topological map, as sketched below. We use videos captured by users holding smartphones and walking through corridors of an office building to evaluate our method. The experiment shows that the computer vision technique is able to accurately and reliably detect 10 classes of common indoor objects and that the second-order hidden Markov model can reliably match the detected landmark sequence with the topological map. This work demonstrates that computer vision and machine learning techniques can play a very useful role in developing smartphone-based indoor positioning applications.
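
To make the second-order matching step concrete, the toy sketch below runs
Viterbi decoding over pair states (previous node, current node) of a
hypothetical topological map; the map layout, landmark classes, and
emission/transition scores are illustrative values, not the paper's trained
model:

    import math

    # Toy topological map: node id -> (landmark class, adjacent node ids).
    topo_map = {
        0: ("door",   {1}),
        1: ("stairs", {0, 2}),
        2: ("door",   {1, 3}),
        3: ("toilet", {2}),
    }

    def emission(node, detected_class, p_correct=0.8):
        # Score of observing a detected class at a node, with a uniform confusion model.
        return p_correct if topo_map[node][0] == detected_class else (1 - p_correct) / 3

    def transition(prev2, prev1, cur):
        # Second-order transition score: require adjacency (or staying put) and
        # penalise immediately walking back to the node visited two steps ago.
        if cur != prev1 and cur not in topo_map[prev1][1]:
            return 1e-6
        return 0.2 if cur == prev2 else 1.0

    def match(detections):
        # Viterbi over pair states (previous node, current node), which turns the
        # second-order chain into a first-order one. Full paths are stored in the
        # DP table to keep the sketch short (fine for corridor-length sequences).
        nodes = list(topo_map)
        beams = {}
        for p in nodes:
            for c in nodes:
                if c == p or c in topo_map[p][1]:
                    score = math.log(emission(p, detections[0])) + \
                            math.log(emission(c, detections[1]))
                    beams[(p, c)] = (score, [p, c])
        for obs in detections[2:]:
            new_beams = {}
            for (p, c), (score, path) in beams.items():
                for n in nodes:
                    s = score + math.log(transition(p, c, n)) + math.log(emission(n, obs))
                    if s > new_beams.get((c, n), (-math.inf, None))[0]:
                        new_beams[(c, n)] = (s, path + [n])
            beams = new_beams
        return max(beams.values())[1]

    # A detected landmark sequence from video should map onto map nodes 0-1-2-3.
    print(match(["door", "stairs", "door", "toilet"]))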
SLNSpeech: solving extended speech separation problem by the help of sign language
A speech separation task can be roughly divided into audio-only separation
and audio-visual separation. To make speech separation technology applicable
to real-world scenarios involving hearing-impaired users, this paper presents
an extended speech separation problem, which refers in particular to
sign-language-assisted speech separation. However, most existing datasets for
speech separation consist of audios and videos that contain only audio and/or
visual modalities. To address the extended speech separation problem, we
introduce a large-scale dataset named Sign Language News Speech (SLNSpeech),
in which the three modalities of audio, vision, and sign language coexist. We
then design a general deep learning network for self-supervised learning over
the three modalities, in particular using sign language embeddings together
with audio or audio-visual information to better solve the speech separation
task. Specifically, we use a 3D residual convolutional network to extract
sign language features and a pretrained VGGNet model to extract visual
features. An improved U-Net with skip connections in the feature extraction
stage is then applied to learn the embeddings among the mixed spectrogram
transformed from the source audios, the sign language features, and the
visual features. Experimental results show that, besides the visual modality,
the sign language modality can also be used alone to supervise the speech
separation task. We also show the effectiveness of sign-language-assisted
speech separation when the visual modality is disturbed. Source code will be
released at http://cheertt.top/homepage/
Comment: 33 pages, 8 figures, 5 tables
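
As a rough illustration of the conditioning scheme described above (not the
paper's exact architecture; the layer sizes, sign embedding dimension, and
spectrogram shape are hypothetical), the sketch below tiles a pooled sign
language embedding across the bottleneck of a small spectrogram U-Net and
predicts a separation mask:

    import torch
    import torch.nn as nn

    class TinyConditionedUNet(nn.Module):
        def __init__(self, sign_dim=256):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
            self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            # Bottleneck consumes spectrogram features plus the tiled sign embedding.
            self.bottleneck = nn.Sequential(nn.Conv2d(32 + sign_dim, 32, 1), nn.ReLU())
            self.dec2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
            # Skip connection from enc1 is concatenated before the final decoder stage.
            self.dec1 = nn.ConvTranspose2d(16 + 16, 1, 4, stride=2, padding=1)

        def forward(self, mixture_spec, sign_embedding):
            # mixture_spec: (B, 1, F, T) magnitude spectrogram of the mixed audio
            # sign_embedding: (B, sign_dim) pooled features from a 3D CNN over sign video
            e1 = self.enc1(mixture_spec)
            e2 = self.enc2(e1)
            sign = sign_embedding[:, :, None, None].expand(-1, -1, e2.shape[2], e2.shape[3])
            b = self.bottleneck(torch.cat([e2, sign], dim=1))
            d2 = self.dec2(b)
            mask = torch.sigmoid(self.dec1(torch.cat([d2, e1], dim=1)))
            return mask * mixture_spec  # masked spectrogram of the target speaker

    model = TinyConditionedUNet()
    out = model(torch.randn(2, 1, 256, 128), torch.randn(2, 256))
    print(out.shape)  # torch.Size([2, 1, 256, 128])

In the full model described in the abstract, the conditioning embedding would
come from the 3D residual network over the sign language video, and VGGNet
visual features could be concatenated at the same point in the same way.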
Quantifying the effects of hydration on corneal stiffness with optical coherence elastography
Several methods have been proposed to assess changes in corneal biomechanical properties due to various factors, such as degenerative diseases, intraocular pressure, and therapeutic interventions (e.g., corneal collagen crosslinking). However, the effect of the corneal tissue hydration state on corneal stiffness is not well understood. In this work, we induce low-amplitude (< 10 μm) elastic waves with a focused micro air-pulse in fresh in situ rabbit corneas (n = 10) in the whole eye-globe configuration at an artificially controlled intraocular pressure. The waves were then detected with a phase-stabilized swept-source optical coherence elastography system. Baseline measurements were taken every 20 minutes for an hour while the corneas were hydrated with 1X PBS. After the measurement at 60 minutes, a 20% dextran solution was topically instilled to dehydrate the corneas. The measurements were then repeated every 20 minutes for another hour. The results showed that the elastic wave velocity decreased as the corneal thickness decreased. Finite element modeling (FEM) was performed using the corneal geometry and elastic wave propagation speed to assess the stiffness of the samples. The results show that the stiffness increased from ~430 kPa during hydration with PBS to ~500 kPa after dehydration with dextran, demonstrating that the corneal hydration state, apart from geometry and intraocular pressure, can change the stiffness of the cornea.
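
The abstract estimates stiffness with finite element modeling; as a rough,
non-authoritative cross-check, the sketch below uses the Rayleigh surface-wave
approximation often quoted in optical coherence elastography work, with an
assumed tissue density, Poisson's ratio, and wave speed (none of which are
reported in the abstract):

    def young_modulus_from_surface_wave(c_r, density=1000.0, poisson=0.49):
        # Young's modulus (Pa) from a Rayleigh-type surface wave speed c_r (m/s),
        # using c_R ~= (0.87 + 1.12*nu) / (1 + nu) * sqrt(mu/rho) and E = 2*mu*(1 + nu).
        shear_speed = c_r * (1.0 + poisson) / (0.87 + 1.12 * poisson)
        shear_modulus = density * shear_speed ** 2
        return 2.0 * shear_modulus * (1.0 + poisson)

    # Example with an assumed ~11.4 m/s wave speed in nearly incompressible tissue.
    print(f"{young_modulus_from_surface_wave(11.4) / 1e3:.0f} kPa")  # ~427 kPa

With these assumed inputs, the first-order estimate lands near the ~430 kPa
reported for the hydrated state, although the paper's FEM additionally accounts
for corneal geometry and intraocular pressure rather than relying on this
simple surface-wave relation.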