1,066 research outputs found
Bio-Inspired Multi-Spectral Image Sensor and Augmented Reality Display for Near-Infrared Fluorescence Image-Guided Surgery
Background: Cancer remains a major public health problem worldwide and poses a huge economic burden. Near-infrared (NIR) fluorescence image-guided surgery (IGS) utilizes molecular markers and imaging instruments to identify and locate tumors during surgical resection. Unfortunately, current state-of-the-art NIR fluorescence imaging systems are bulky, costly, and lack both fluorescence sensitivity under surgical illumination and co-registration accuracy between multimodal images. Additionally, the monitor-based display units are disruptive to the surgical workflow and are suboptimal at indicating the 3-dimensional position of labeled tumors. These major obstacles have prevented the wide acceptance of NIR fluorescence imaging as the standard of care for cancer surgery. The goal of this dissertation is to enhance cancer treatment by developing novel image sensors and presenting the information using holographic augmented reality (AR) display to the physician in intraoperative settings.
Method: By mimicking the visual system of the Morpho butterfly, several single-chip, color-NIR fluorescence image sensors and systems were developed with CMOS technologies and pixelated interference filters. Using a holographic AR goggle platform, an NIR fluorescence IGS display system was developed. Optoelectronic evaluation was performed on the prototypes to evaluate the performance of each component, and small animal models and large animal models were used to verify the overall effectiveness of the integrated systems at cancer detection.
Result: The single-chip bio-inspired multispectral logarithmic image sensor I developed outperforms state-of-the-art NIR fluorescence imaging instruments on the main performance indicators. The image sensors achieve up to 140 dB dynamic range. The sensitivity under surgical illumination reaches 6108 V/(mW/cm²), up to 25 times higher than existing instruments, and the signal-to-noise ratio is up to 56 dB, 11 dB greater than the state of the art. Together, these enable high-sensitivity fluorescence imaging under surgical illumination. The pixelated interference filters enable temperature-independent co-registration accuracy between multimodal images. Pre-clinical trials with small animal models demonstrate that the sensor can achieve up to 95% sensitivity and 94% specificity with tumor-targeted NIR molecular probes. The holographic AR goggle provides the physician with a non-disruptive 3-dimensional display in the clinical setup; it is the first display system that co-registers a virtual image with the human eyes and supports video-rate image transmission. The imaging system was tested in a veterinary operating room on canine patients with naturally occurring cancers. In addition, a time-domain pulse-width-modulation address-event-representation multispectral image sensor and a handheld multispectral camera prototype were developed.
Conclusion: The major problems of current state-of-the-art NIR fluorescence imaging systems are successfully solved. Owing to the enhanced performance and user experience, the bio-inspired sensors and augmented reality display system will give medical care providers much-needed technology to enable more accurate, value-based healthcare.
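The 140 dB dynamic-range figure quoted above follows the usual sensor convention of expressing the ratio between the largest and smallest detectable irradiance on a logarithmic scale. As a minimal illustrative sketch (not taken from the dissertation itself), 140 dB corresponds to a 10^7:1 intensity ratio:

```python
import math

def dynamic_range_db(i_max: float, i_min: float) -> float:
    """Sensor dynamic range in dB: 20 * log10 of the intensity ratio."""
    return 20 * math.log10(i_max / i_min)

# A 10^7 : 1 ratio between brightest and dimmest detectable signal
# is exactly 140 dB -- the figure reported for the logarithmic sensor.
print(dynamic_range_db(1e7, 1.0))  # 140.0
```

A logarithmic pixel compresses this ratio into a manageable voltage swing, which is how such a wide range fits on a single chip.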
Review of photoacoustic imaging plus X
Photoacoustic imaging (PAI) is a novel biomedical imaging modality that
combines rich optical contrast with the deep penetration of ultrasound. To
date, PAI technology has found applications in various biomedical fields. In
this review, we present an overview of the emerging research frontiers that
combine PAI with other advanced technologies, termed PAI plus X, including but
not limited to PAI plus treatment, PAI plus new circuit design, PAI plus
accurate positioning systems, PAI plus fast scanning systems, PAI plus novel
ultrasound sensors, PAI plus advanced laser sources, PAI plus deep learning,
and PAI plus other imaging modalities. We discuss each technology's current
state, technical advantages, and prospects for application, as reported mostly
in the past three years. Lastly, we discuss and summarize the challenges and
potential future work in the PAI plus X area.
A Handheld Fine-Grained RFID Localization System with Complex-Controlled Polarization
There is much interest in fine-grained RFID localization systems. Existing
systems for accurate localization typically require infrastructure, either in
the form of extensive reference tags or many antennas (e.g., antenna arrays) to
localize RFID tags within their radio range. Yet, there remains a need for
fine-grained RFID localization solutions that are in a compact, portable,
mobile form, that can be held by users as they walk around areas to map them,
such as in retail stores, warehouses, or manufacturing plants.
We present the design, implementation, and evaluation of POLAR, a portable
handheld system for fine-grained RFID localization. Our design introduces two
key innovations that enable robust, accurate, and real-time localization of
RFID tags. The first is complex-controlled polarization (CCP), a mechanism for
localizing RFIDs at all orientations through software-controlled polarization
of two linearly polarized antennas. The second is joint tag discovery and
localization (JTDL), a method for simultaneously localizing and reading tags
with zero-overhead regardless of tag orientation. Building on these two
techniques, we develop an end-to-end handheld system that addresses a number of
practical challenges in self-interference, efficient inventorying, and
self-localization. Our evaluation demonstrates that POLAR achieves a median
accuracy of a few centimeters in each of the x/y/z dimensions in practical
indoor environments.
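The complex-controlled polarization idea described above can be illustrated with a toy far-field model: two orthogonal linearly polarized antennas driven with complex weights synthesize an effective polarization, and a 90-degree phase offset between equal-amplitude weights (circular polarization) removes the orientation nulls that a single linear antenna suffers. The model below is a simplified sketch under my own assumptions, not POLAR's actual signal chain:

```python
import numpy as np

def tag_response(weights, theta):
    """Complex channel seen by a linearly polarized tag at angle theta when
    two orthogonal linear antennas transmit with complex weights (wx, wy).
    Simplified, hypothetical far-field model for illustration only."""
    wx, wy = weights
    return wx * np.cos(theta) + wy * np.sin(theta)

thetas = np.linspace(0, np.pi, 181)  # sweep tag orientation

# One linear antenna alone: the response has a deep null when the tag
# is cross-polarized (theta = 90 degrees).
linear = np.abs(tag_response((1.0, 0.0), thetas))

# Equal-amplitude weights with a 90-degree phase offset (circular
# polarization): the response magnitude is the same at every orientation.
circular = np.abs(tag_response((1 / np.sqrt(2), 1j / np.sqrt(2)), thetas))

print(linear.min())                      # ~0: a polarization null
print(circular.max() - circular.min())   # ~0: orientation-independent
```

In this picture, software control of the two weights lets a handheld reader trade off such orientation-robust inventorying against polarization diversity for localization, which is the flavor of flexibility CCP provides.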
Quanta Burst Photography
Single-photon avalanche diodes (SPADs) are an emerging sensor technology
capable of detecting individual incident photons, and capturing their
time-of-arrival with high timing precision. While these sensors were limited to
single-pixel or low-resolution devices in the past, recently, large (up to 1
MPixel) SPAD arrays have been developed. These single-photon cameras (SPCs) are
capable of capturing high-speed sequences of binary single-photon images with
no read noise. We present quanta burst photography, a computational photography
technique that leverages SPCs as passive imaging devices for photography in
challenging conditions, including ultra low-light and fast motion. Inspired by
recent success of conventional burst photography, we design algorithms that
align and merge binary sequences captured by SPCs into intensity images with
minimal motion blur and artifacts, high signal-to-noise ratio (SNR), and high
dynamic range. We theoretically analyze the SNR and dynamic range of quanta
burst photography, and identify the imaging regimes where it provides
significant benefits. We demonstrate, via a recently developed SPAD array, that
the proposed method is able to generate high-quality images for scenes with
challenging lighting, complex geometries, high dynamic range and moving
objects. With the ongoing development of SPAD arrays, we envision quanta burst
photography finding applications in both consumer and scientific photography.

Comment: A version with better-quality images can be found on the project webpage: http://wisionlab.cs.wisc.edu/project/quanta-burst-photography
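The merging step described above rests on the standard single-photon image-formation model: each SPAD pixel in each binary frame fires with probability 1 − exp(−Φ), where Φ is the photon flux per exposure, so averaging many aligned binary frames and inverting that saturation recovers the flux. The sketch below shows only this static-scene merge (no motion alignment, which is the paper's harder contribution), with simulated frames rather than real SPAD data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground-truth photon flux per pixel per exposure (3 "pixels").
flux = np.array([0.05, 0.5, 2.0])

T = 10_000  # number of binary frames in the burst

# Each SPAD pixel outputs 1 iff at least one photon arrived in the
# exposure: P(1) = 1 - exp(-flux)  (Poisson arrivals, no read noise).
frames = rng.random((T, flux.size)) < (1 - np.exp(-flux))

# Merge the aligned binary sequence: average the frames, then invert
# the Bernoulli saturation (maximum-likelihood flux estimate).
p_hat = frames.mean(axis=0)
flux_hat = -np.log1p(-p_hat)

print(np.round(flux_hat, 2))  # close to [0.05, 0.5, 2.0]
```

The inversion is what extends dynamic range: bright pixels saturate toward p = 1 in any single frame, but the nonlinearity −ln(1 − p) stretches the averaged counts back onto a linear flux scale.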