Using Machine-Learning to Optimize phase contrast in a Low-Cost Cellphone Microscope
Cellphones equipped with high-quality cameras and powerful CPUs as well as
GPUs are widespread. This opens new prospects to use such existing
computational and imaging resources to perform medical diagnosis in developing
countries at a very low cost.
Many relevant samples, like biological cells or waterborne parasites, are
almost fully transparent. Because they do not absorb light but only alter its
phase, they are nearly invisible in brightfield microscopy. Expensive equipment
and procedures for microscopic contrasting or sample staining are often not
available.
By applying machine-learning techniques, such as a convolutional neural
network (CNN), it is possible to learn, from a given dataset, a relationship
between the samples to be examined and their optimal light source shapes in
order to increase, e.g., phase contrast, enabling real-time applications. For
the experimental setup, we developed a 3D-printed smartphone microscope for
less than $100 using only off-the-shelf components, such as a low-cost video
projector. The fully automated system ensures true Koehler illumination, with
an LCD serving as the condenser aperture and a reversed smartphone lens as the
microscope objective. We show that varying the light source shape with the
pre-trained CNN not only improves the phase contrast but also gives the
impression of improved optical resolution without adding any special optics,
as demonstrated by measurements.
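The idea of choosing an illumination pattern to maximize image contrast can be illustrated with a minimal sketch. The helper names (`michelson_contrast`, `best_pattern`) and the brute-force search are assumptions for illustration; the paper's contribution is replacing such a search with a pre-trained CNN that predicts the pattern directly:

```python
import numpy as np

def michelson_contrast(img):
    """Michelson contrast (Imax - Imin) / (Imax + Imin), a common contrast measure."""
    lo, hi = float(img.min()), float(img.max())
    return (hi - lo) / (hi + lo + 1e-12)

def best_pattern(images_by_pattern):
    """Pick the condenser-aperture pattern whose resulting image maximizes contrast.

    `images_by_pattern` maps a pattern name to the camera image captured under
    that illumination. A CNN, as in the paper, would predict the pattern from
    the sample image directly instead of searching exhaustively.
    """
    return max(images_by_pattern,
               key=lambda p: michelson_contrast(images_by_pattern[p]))
```

Exhaustive search over candidate apertures is too slow for real-time use, which is the motivation for learning the sample-to-pattern mapping.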
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
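The event stream described above (time, location, and sign of each brightness change) can be sketched as a simple data structure, together with the common first processing step of accumulating events into a frame. The class and function names here are illustrative, not from any particular event-camera SDK:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    t: float       # timestamp in seconds (real sensors offer microsecond resolution)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def accumulate_events(events, width, height):
    """Sum event polarities per pixel to form an event-count frame.

    This is one of the simplest event representations; many algorithms in the
    survey instead process the asynchronous stream directly to preserve its
    high temporal resolution.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        frame[e.y, e.x] += e.polarity
    return frame
```

Accumulation discards precise timing, which motivates the specialized, often learning-based or spiking, methods the survey covers.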
Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns
We introduce Deep Thermal Imaging, a new approach for close-range automatic
recognition of materials to enhance the understanding of people and ubiquitous
technologies of their proximal environment. Our approach uses a low-cost mobile
thermal camera integrated into a smartphone to capture thermal textures. A deep
neural network classifies these textures into material types. This approach
works effectively without the need for ambient light sources or direct contact
with materials. Furthermore, the use of a deep learning network removes the
need to handcraft the set of features for different materials. We evaluated the
performance of the system by training it to recognise 32 material types in both
indoor and outdoor environments. Our approach produced recognition accuracies
above 98% in 14,860 images of 15 indoor materials and above 89% in 26,584
images of 17 outdoor materials. We conclude by discussing its potentials for
real-time use in HCI applications and future directions.
Comment: Proceedings of the 2018 CHI Conference on Human Factors in Computing
Systems
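Classifying material types from thermal texture patches can be sketched with a deliberately simplified stand-in: hand-crafted texture statistics and a nearest-centroid classifier. The paper's point is that a deep network learns its features instead of using hand-crafted ones like these; all names below are hypothetical:

```python
import numpy as np

def texture_features(patch):
    """Hand-crafted stand-in features for a thermal patch: mean temperature,
    temperature spread, and mean absolute gradient (surface-texture roughness).
    The paper's deep network learns such features from data instead."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(),
                     np.abs(gx).mean() + np.abs(gy).mean()])

class NearestCentroid:
    """Assign a patch to the material whose mean feature vector is closest."""
    def fit(self, patches, labels):
        feats = np.array([texture_features(p) for p in patches])
        self.classes_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[[l == c for l in labels]].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, patch):
        d = np.linalg.norm(self.centroids_ - texture_features(patch), axis=1)
        return self.classes_[int(np.argmin(d))]
```

Such simple statistics separate only coarse material classes; distinguishing 32 materials in uncontrolled outdoor conditions is what requires the learned deep features.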