A micropower centroiding vision processor
The ePetri dish, an on-chip cell imaging platform based on subpixel perspective sweeping microscopy (SPSM)
We report a chip-scale lensless wide-field-of-view microscopy imaging technique, subpixel perspective sweeping microscopy, which can render microscopy images of growing or confluent cell cultures autonomously. We demonstrate that this technology can be used to build smart Petri dish platforms, termed ePetri, for cell culture experiments. This technique leverages the recent broad and cheap availability of high-performance image sensor chips to provide a low-cost and automated microscopy solution. Unlike the two major classes of lensless microscopy methods, optofluidic microscopy and digital in-line holographic microscopy, this new approach is fully capable of working with cell cultures or any samples in which cells may be contiguously connected. With our prototype, we demonstrate the ability to image samples over an area of 6 mm × 4 mm at 660-nm resolution. As a further demonstration, we show that the method can be applied to image color-stained cell culture samples and to image and track cell culture growth directly within an incubator. Finally, we show that this method can track embryonic stem cell differentiation over the entire sensor surface. Smart Petri dishes based on this technology could significantly streamline and improve cell culture experiments by cutting down on human labor and contamination risks.
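The reconstruction idea behind techniques like this can be sketched as shift-and-add pixel super-resolution: low-resolution frames captured at known subpixel offsets are interleaved onto a finer grid. This is an illustration of the general principle only, not the paper's actual SPSM pipeline; all names and numbers are assumptions.

```python
import numpy as np

def shift_and_add(lr_frames, offsets, factor):
    """Place each low-res frame onto an upsampled grid at its subpixel
    offset (given in high-res pixel units) and average overlaps."""
    h, w = lr_frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hr)
    for frame, (dy, dx) in zip(lr_frames, offsets):
        ys = np.arange(h) * factor + dy
        xs = np.arange(w) * factor + dx
        hr[np.ix_(ys, xs)] += frame       # scatter frame to its offset sites
        count[np.ix_(ys, xs)] += 1
    return hr / np.maximum(count, 1)      # average where samples overlap

# Four shifted copies of a flat 2x2 frame tile the full 4x4 high-res grid:
frames = [np.ones((2, 2)) for _ in range(4)]
hr = shift_and_add(frames, [(0, 0), (0, 1), (1, 0), (1, 1)], factor=2)
```

With offsets covering every subpixel phase, the upsampled grid is fully sampled; in practice the offsets come from the sweeping illumination geometry.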
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those demanding low latency, high speed, or high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
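The event stream described above is often converted to a 2-D "event frame" before applying frame-based algorithms. A minimal illustrative sketch follows; the (t, x, y, polarity) tuple layout and all numbers are assumptions, not specifics from the survey.

```python
import numpy as np

def events_to_frame(events, width, height, t_start, t_end):
    """Sum the signed polarities of events with t in [t_start, t_end)."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        if t_start <= t < t_end:
            frame[y, x] += 1 if p > 0 else -1   # +1 brighter, -1 darker
    return frame

# Three events on a hypothetical 4x4 sensor:
evts = [(0.001, 1, 2, +1), (0.002, 1, 2, +1), (0.003, 3, 0, -1)]
f = events_to_frame(evts, width=4, height=4, t_start=0.0, t_end=0.01)
```

Such accumulation trades away the microsecond timing that makes event cameras attractive, which is why the survey also covers representations and processors that keep events asynchronous.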
Non-line-of-sight tracking of people at long range
A remote-sensing system that can determine the position of hidden objects has
applications in many critical real-life scenarios, such as search and rescue
missions and safe autonomous driving. Previous work has shown the ability to
range and image objects hidden from the direct line of sight, employing
advanced optical imaging technologies aimed at small objects at short range. In
this work we demonstrate a long-range tracking system based on single laser
illumination and single-pixel single-photon detection. This enables us to track
one or more people hidden from view at a stand-off distance of over 50 m. These
results pave the way towards next generation LiDAR systems that will
reconstruct not only the direct-view scene but also the main elements hidden
behind walls or corners.
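The timing principle can be illustrated with a toy calculation: the single-photon detector builds a histogram of photon arrival times, and the extra path length introduced by a hidden target shows up as a delayed peak. The peak-picking and path geometry below are deliberate simplifications for illustration, not the paper's actual retrieval algorithm.

```python
C = 299_792_458.0  # speed of light in m/s

def hidden_path_length(histogram, bin_width_s, direct_path_m):
    """Take the histogram peak as the photon time of flight, convert it to
    an optical path length, subtract the known direct (laser-spot-to-
    detector) path, and halve the excess as a crude target range."""
    peak_bin = max(range(len(histogram)), key=lambda i: histogram[i])
    total_path_m = peak_bin * bin_width_s * C
    return (total_path_m - direct_path_m) / 2.0

# Hypothetical data: peak at bin 400 with 1 ns bins and a 100 m direct path.
hist = [0] * 400 + [10] + [0] * 99
d = hidden_path_length(hist, bin_width_s=1e-9, direct_path_m=100.0)
```

In a real system the locus of possible target positions for a given delay is an ellipsoid, and position is recovered by intersecting such constraints over time or over multiple illumination points.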
A Survey of Positioning Systems Using Visible LED Lights
© 2018 IEEE. As the Global Positioning System (GPS) cannot provide satisfactory performance in indoor environments, indoor positioning technology, which utilizes indoor wireless signals instead of GPS signals, has grown rapidly in recent years. Meanwhile, visible light communication (VLC) using light devices such as light-emitting diodes (LEDs) has been deemed a promising candidate in heterogeneous wireless networks that may collaborate with radio frequency (RF) wireless networks. In particular, light fidelity (LiFi) has great potential for deployment in future indoor environments because of its high throughput and security advantages. This paper provides a comprehensive study of a novel positioning technology based on visible white LED lights, which has attracted much attention from both academia and industry. The essential characteristics and principles of this system are discussed in depth, and relevant positioning algorithms and designs are classified and elaborated. This paper undertakes a thorough investigation into current LED-based indoor positioning systems and compares their performance across many aspects, such as test environment, accuracy, and cost. It presents indoor hybrid positioning systems combining VLC with other systems (e.g., inertial sensors and RF systems). We also review and classify outdoor VLC positioning applications for the first time. Finally, this paper surveys major advances as well as open issues, challenges, and future research directions in VLC positioning systems.
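A common building block in the RSS-based VLC positioning schemes such surveys cover is trilateration: once distances to several LEDs at known positions have been estimated (e.g. from received power via a Lambertian channel model), the receiver location follows from linear least squares. A hedged sketch of that step; the anchor layout is a made-up example, not from the survey.

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Solve (x - xi)^2 + (y - yi)^2 = ri^2 by linearizing each circle
    equation against the first anchor, then least squares."""
    anchors = np.asarray(anchors, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    A, b = [], []
    for ai, ri in zip(anchors[1:], ranges[1:]):
        A.append(2 * (ai - a0))                       # 2(xi-x0), 2(yi-y0)
        b.append(ai @ ai - a0 @ a0 - ri**2 + r0**2)   # constant terms
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Three ceiling LEDs (projected to the floor plane), receiver truly at (1, 2):
leds = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [np.sqrt(5.0), np.sqrt(13.0), np.sqrt(5.0)]
pos = trilaterate_2d(leds, dists)
```

With more than three anchors the same least-squares form absorbs noisy range estimates, which is one reason dense LED lighting makes indoor positioning attractive.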
A preliminary experiment definition for video landmark acquisition and tracking
Six scientific objectives/experiments were derived, covering agriculture/forestry/range resources, land use, geology/mineral resources, water resources, marine resources, and environmental surveys. Computer calculations were then made of the spectral radiance signature of each of 25 candidate targets as seen by a satellite sensor system. An imaging system capable of recognizing, acquiring, and tracking specific generic types of surface features was defined. A preliminary experiment definition and design of a video Landmark Acquisition and Tracking system is given. This device will search a 10-mile swath while orbiting the Earth, looking for land/water interfaces such as coastlines and rivers.
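Land/water interfaces are comparatively easy to detect because water absorbs strongly in the near-infrared. The toy sketch below, thresholding followed by a change-detection scan along each row, is a stand-in for the idea, not the system defined in the report; the threshold and data are assumptions.

```python
import numpy as np

def water_mask(nir, threshold):
    """Classify low near-IR radiance pixels as water."""
    return nir < threshold

def interface_pixels(mask):
    """Indices where the land/water classification flips along a row,
    i.e. candidate coastline or riverbank pixels."""
    return np.argwhere(mask[:, 1:] != mask[:, :-1])

# One scan row: bright land on the left, dark water on the right.
nir = np.array([[10, 10, 2, 2]])
edges = interface_pixels(water_mask(nir, threshold=5))
```

A tracker would then follow such transition pixels from frame to frame as the swath advances along the orbit.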