A local algorithm for the computation of image velocity via constructive interference of global Fourier components
A novel Fourier-based technique for local motion detection in image sequences is proposed. In this method, the instantaneous velocities of local image points are inferred directly from the global 3D Fourier components of the image sequence, by selecting those velocities for which the superposition of the corresponding Fourier gratings produces constructive interference at the image point. Image velocities can therefore be assigned locally even though they are computed from the phases and amplitudes of global Fourier components (spanning the whole image sequence) that have been filtered according to the motion-constraint equation, which reduces the aperture effects that windowing typically introduces in other local methods. Regularization is introduced for sequences with smooth flow fields, and aperture effects and their impact on optic-flow regularization are investigated in this context. The algorithm is tested on both synthetic and real image sequences, and the results are compared to those of other local methods. Finally, we show that other motion features, such as motion direction, can be computed within the same algorithmic framework without requiring an intermediate representation of local velocity, which is an important characteristic of the proposed method.
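The motion-constraint equation underlying this family of methods can be illustrated with a toy example (this is a minimal sketch of the spectral principle, not the authors' algorithm): a pattern translating at velocity v concentrates its space-time Fourier energy on the line omega + v·k = 0, so scoring candidate velocities by the spectral energy near that line recovers v.

```python
import numpy as np

# Sketch of the motion-constraint idea: a 1D pattern translating at
# velocity v_true has its 2D (time x space) Fourier energy on the line
# omega + v * k = 0 (modulo aliasing), so we score candidate velocities
# by the energy collected on that line.
rng = np.random.default_rng(0)
N, T, v_true = 64, 64, 3.0

pattern = rng.standard_normal(N)
# Build the sequence by circularly shifting the pattern each frame.
seq = np.stack([np.roll(pattern, int(round(v_true * t))) for t in range(T)])

F = np.fft.fft2(seq)                 # axes: (time -> omega, space -> k)
omega = np.fft.fftfreq(T)[:, None]   # temporal frequencies (cycles/frame)
k = np.fft.fftfreq(N)[None, :]       # spatial frequencies (cycles/pixel)

def score(v):
    """Spectral energy on the motion-constraint line omega + v*k = 0 (mod 1)."""
    d = np.abs((omega + v * k + 0.5) % 1.0 - 0.5)  # distance modulo 1
    return (np.abs(F) ** 2 * (d < 1e-9)).sum()

candidates = np.arange(-5.0, 5.5, 0.5)
v_est = candidates[np.argmax([score(v) for v in candidates])]
print(v_est)  # 3.0
```

The paper's contribution is to localize this globally defined constraint via constructive interference of the phases at each image point; the sketch above only shows the global spectral plane that those components live on.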
Lensfree computational microscopy tools for cell and tissue imaging at the point-of-care and in low-resource settings.
The recent revolution in digital technologies and information processing methods presents important opportunities to transform the way optical imaging is performed, particularly toward improving the throughput of microscopes while reducing their relative cost and complexity. Lensfree computational microscopy is rapidly emerging toward this end: by discarding lenses and other bulky optical components of conventional imaging systems and relying on digital computation instead, it can achieve both reflection- and transmission-mode microscopy over a large field-of-view within compact, cost-effective and mechanically robust architectures. Such high-throughput, miniaturized imaging devices can provide a complementary toolset for telemedicine applications and point-of-care diagnostics by facilitating complex and critical tasks such as cytometry and the microscopic analysis of, e.g., blood smears, Pap tests and tissue samples. In this article, the basics of these lensfree microscopy modalities are reviewed and their clinically relevant applications discussed.
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems from computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to provide an overview of different
examples of geometric deep learning problems and to present available
solutions, key difficulties, applications, and future research directions in
this nascent field.
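One widely used instance of such a generalization is the graph-convolution layer of Kipf and Welling, which replaces grid convolution with normalized neighbourhood averaging on an arbitrary graph. The sketch below uses random weights purely to illustrate the propagation rule and shapes; it is not from the surveyed paper.

```python
import numpy as np

# One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
# Aggregates each node's neighbourhood (plus itself), applies a learned
# linear map, then a nonlinearity -- the graph analogue of a conv layer.
rng = np.random.default_rng(0)

# Toy graph: 4 nodes on a path 0-1-2-3, with 3 input features per node.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))

def gcn_layer(A, H, W):
    """Symmetric-normalized aggregation, linear transform, ReLU."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

W = rng.standard_normal((3, 2))           # 3 input -> 2 output features
H1 = gcn_layer(A, X, W)
print(H1.shape)  # (4, 2)
```

Stacking such layers gives each node a receptive field that grows one hop per layer, mirroring how stacked grid convolutions grow their spatial receptive field.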
py4DSTEM: a software package for multimodal analysis of four-dimensional scanning transmission electron microscopy datasets
Scanning transmission electron microscopy (STEM) allows for imaging,
diffraction, and spectroscopy of materials on length scales ranging from
microns to atoms. By using a high-speed, direct electron detector, it is now
possible to record a full 2D image of the diffracted electron beam at each
probe position, typically a 2D grid of probe positions. These 4D-STEM datasets
are rich in information, including signatures of the local structure,
orientation, deformation, electromagnetic fields and other sample-dependent
properties. However, extracting this information requires complex analysis
pipelines, from data wrangling to calibration to analysis to visualization, all
while maintaining robustness against imaging distortions and artifacts. In this
paper, we present py4DSTEM, an analysis toolkit for measuring material
properties from 4D-STEM datasets, written in the Python language and released
with an open source license. We describe the algorithmic steps for dataset
calibration and various 4D-STEM property measurements in detail, and present
results from several experimental datasets. We have also implemented a simple
and universal file format appropriate for electron microscopy data in py4DSTEM,
which uses the open source HDF5 standard. We hope this tool will benefit the
research community and help advance the developing standards for data and
computational methods in electron microscopy, and we invite the community to
contribute to this ongoing, fully open-source project.
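The most basic 4D-STEM measurement such a toolkit automates, the virtual detector, can be sketched in plain NumPy (this illustrates the idea only and is not py4DSTEM's actual API): integrate each diffraction pattern over a detector mask to collapse the 4D dataset into a 2D virtual image.

```python
import numpy as np

# Virtual bright-field imaging from a synthetic 4D-STEM dataset:
# one diffraction pattern per probe position, reduced to one intensity
# value per position by summing over a circular detector mask.
rng = np.random.default_rng(0)

# Synthetic dataset: 8x8 grid of probe positions, 32x32 diffraction patterns.
data = rng.random((8, 8, 32, 32))

# Circular bright-field detector centred on the unscattered beam.
ky, kx = np.mgrid[:32, :32]
mask = (ky - 16) ** 2 + (kx - 16) ** 2 < 5 ** 2

virtual_image = data[..., mask].sum(axis=-1)  # one value per probe position
print(virtual_image.shape)  # (8, 8)
```

Swapping the mask (annular, segmented, point) yields the other standard virtual-detector images; the real analysis pipelines described in the paper add the calibration and distortion-correction steps this sketch omits.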
Using Machine Learning to Optimize Phase Contrast in a Low-Cost Cellphone Microscope
Cellphones equipped with high-quality cameras and powerful CPUs as well as
GPUs are widespread. This opens new prospects to use such existing
computational and imaging resources to perform medical diagnosis in developing
countries at a very low cost.
Many relevant samples, such as biological cells or waterborne parasites, are
almost fully transparent. As they exhibit no absorption and alter only the
light's phase, they are almost invisible in brightfield microscopy, and
expensive equipment and procedures for microscopic contrasting or sample
staining are often not available.
By applying machine-learning techniques such as a convolutional neural
network (CNN), it is possible to learn from a given dataset the relationship
between a sample and its optimal light-source shape, in order to increase,
e.g., phase contrast, and to enable real-time applications. For the
experimental setup, we developed a 3D-printed smartphone microscope for less
than \$100 using only off-the-shelf components, such as a low-cost video
projector. The fully automated system ensures true Koehler illumination, with
an LCD as the condenser aperture and a reversed smartphone lens as the
microscope objective. We show that shaping the light source with the
pre-trained CNN not only improves the phase contrast but also gives the
impression of improved optical resolution without adding any special optics,
as demonstrated by measurements.
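Why a shaped source helps can be seen with a deliberately crude toy model (no CNN here, and nothing from the paper's actual pipeline): a phase-only object is flat in intensity under on-axis brightfield illumination, while oblique illumination produces gradient-like contrast, which we mimic below with directional phase derivatives and score by image contrast.

```python
import numpy as np

# Toy stand-in for illumination-shape optimization: score candidate
# "source shapes" by the contrast they produce on a phase-only sample.
# Oblique illumination is approximated (to first order) by a directional
# derivative of the phase; brightfield of a pure-phase object is flat.
rng = np.random.default_rng(0)

# Phase-only sample: a smooth blob; intensity under brightfield is flat.
y, x = np.mgrid[:64, :64]
phase = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)

def contrast(image):
    return image.std()

candidates = {
    "brightfield": np.ones_like(phase),             # no phase contrast
    "oblique_x": np.abs(np.gradient(phase, axis=1)),
    "oblique_y": np.abs(np.gradient(phase, axis=0)),
}

best = max(candidates, key=lambda name: contrast(candidates[name]))
print(best)  # an oblique pattern, never "brightfield"
```

The paper's contribution is to learn this sample-to-source mapping with a CNN from data rather than enumerate patterns, but the objective, pick the source that maximizes contrast for the sample at hand, is the same.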
Spectral Generalized Multi-Dimensional Scaling
Multidimensional scaling (MDS) is a family of methods that embed a given set
of points into a simple, usually flat, domain. The points are assumed to be
sampled from some metric space, and the mapping attempts to preserve the
distances between each pair of points in the set. Distances in the target space
can be computed analytically in this setting. Generalized MDS is an extension
that allows mapping one metric space into another, that is, multidimensional
scaling into target spaces in which distances are evaluated numerically rather
than analytically. Here, we propose an efficient approach for computing such
mappings between surfaces based on their natural spectral decomposition, where
the surfaces are treated as sampled metric spaces. The resulting spectral-GMDS
procedure enables efficient embedding by implicitly incorporating smoothness of
the mapping into the problem, thereby substantially reducing the complexity
involved in its solution while practically overcoming its non-convex nature.
The method is compared to existing techniques that compute dense correspondence
between shapes. Numerical experiments of the proposed method demonstrate its
efficiency and accuracy compared to state-of-the-art approaches.
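The flat-target special case the abstract starts from, classical MDS, has a closed-form spectral solution, which conveys the spirit of the spectral approach (spectral GMDS itself generalizes to curved target surfaces and is more involved than this sketch): double-center the squared distance matrix and take its top eigenvectors.

```python
import numpy as np

# Classical MDS: recover a flat embedding whose Euclidean distances
# reproduce a given distance matrix, via eigendecomposition of the
# double-centred squared-distance (Gram) matrix.
rng = np.random.default_rng(0)

# Sample points in the plane and compute their pairwise distances.
P = rng.standard_normal((10, 2))
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)

def classical_mds(D, dim):
    """Embed points in R^dim so Euclidean distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of centred points
    w, V = np.linalg.eigh(B)                   # eigenvalues ascending
    w, V = w[::-1][:dim], V[:, ::-1][:, :dim]  # keep the top `dim`
    return V * np.sqrt(np.maximum(w, 0.0))

Y = classical_mds(D, dim=2)
D_rec = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.allclose(D, D_rec))  # True: planar distances are recovered exactly
```

When the target is a curved surface rather than a flat space, no such closed form exists and distances must be evaluated numerically, which is exactly the regime GMDS, and the spectral acceleration proposed here, addresses.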