593 research outputs found

    Bio-Inspired Multi-Spectral Imaging Sensors and Algorithms for Image Guided Surgery

    Get PDF
    Image-guided surgery (IGS) utilizes emerging imaging technologies to provide additional structural and functional information to the physician in clinical settings. This additional visual information can help physicians delineate cancerous tissue during resection as well as avoid damage to nearby healthy tissue. Near-infrared (NIR) fluorescence imaging (700 nm to 900 nm wavelengths) is a promising imaging modality for IGS for the following reasons. First, tissue absorption and scattering in the NIR window are very low, which allows for deeper imaging and localization of tumor tissue in the range of several millimeters to a centimeter, depending on the tissue surrounding the tumor. Second, spontaneous tissue fluorescence emission is minimal in the NIR region, allowing for imaging with a high signal-to-background ratio compared to visible-spectrum fluorescence imaging. Third, decoupling the fluorescence signal from the visible spectrum allows for optimization of NIR fluorescence while attaining high-quality color images. Fourth, there are two FDA-approved fluorescent dyes in the NIR region—namely methylene blue (MB) and indocyanine green (ICG)—which can help to identify tumor tissue due to passive accumulation in human subjects. The aforementioned advantages have led to the development of NIR fluorescence imaging systems for a variety of clinical applications, such as sentinel lymph node imaging, angiography, and tumor margin assessment. With these technological advances, secondary surgeries due to positive tumor margins or damage to healthy organs can be largely mitigated, reducing the emotional and financial toll on the patient. Currently, several NIR fluorescence imaging systems (NFIS) are available commercially or are undergoing clinical trials, such as FLARE, SPY, PDE, and Fluobeam. These systems capture multi-spectral images using complex optical equipment combined with real-time image processing to present an augmented view to the surgeon. The information is presented on a standard monitor above the operating bed, which requires the physician to stop the surgical procedure and look up at the monitor. This break in the surgical flow sometimes outweighs the benefits of fluorescence-based IGS, especially in time-critical surgical situations. Furthermore, these instruments tend to be very bulky and have a large footprint, which significantly complicates their adoption in an already crowded operating room. In this document, I present the development of a compact and wearable goggle system capable of real-time sensing of both NIR fluorescence and color information. The imaging system is inspired by the ommatidia of the monarch butterfly, in which pixelated spectral filters are integrated with light-sensitive elements. The pixelated spectral filters are fabricated via a carefully optimized nanofabrication procedure and integrated with a CMOS imaging array. The entire imaging system has been optimized for high signal-to-background fluorescence imaging using an analytical approach, and the efficacy of the system has been experimentally verified. The bio-inspired spectral imaging sensor is integrated with an FPGA for compact, real-time signal processing and a wearable goggle for easy integration in the operating room. The complete imaging system is undergoing clinical trials at Washington University School of Medicine in St. Louis for imaging sentinel lymph nodes in both breast cancer and melanoma patients.
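
    The augmented-view step such goggle systems perform can be illustrated with a minimal, hedged sketch: given a co-registered color frame and a NIR fluorescence frame from the pixelated-filter sensor, fluorescence above an assumed background-relative threshold is blended into the color view. The function name, the fixed threshold, and the green pseudo-color choice are illustrative assumptions, not the thesis's actual FPGA pipeline.

```python
import numpy as np

def overlay_fluorescence(color_rgb: np.ndarray, nir: np.ndarray,
                         sbr_threshold: float = 2.0) -> np.ndarray:
    """Blend a NIR fluorescence frame into a co-registered color frame.

    sbr_threshold is a hypothetical signal-to-background cutoff; the thesis
    optimizes the imaging chain analytically rather than with a fixed value.
    """
    background = float(np.median(nir)) + 1e-6        # crude background estimate
    mask = nir > sbr_threshold * background          # pixels treated as fluorescent
    rng = float(nir.max() - nir.min()) + 1e-6
    norm = (nir - float(nir.min())) / rng            # scale NIR to [0, 1]
    overlay = color_rgb.astype(np.float32)
    overlay[mask, 1] = np.maximum(overlay[mask, 1], 255.0 * norm[mask])  # green pseudo-color layer
    return overlay.astype(np.uint8)
```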

    There and Back Again: Self-supervised Multispectral Correspondence Estimation

    Full text link
    Across a wide range of applications, from autonomous vehicles to medical imaging, multi-spectral images provide an opportunity to extract additional information not present in color images. One of the most important steps in making this information readily available is the accurate estimation of dense correspondences between different spectra. Due to the nature of cross-spectral images, most correspondence estimation techniques for the visual domain are simply not applicable. Furthermore, most cross-spectral techniques rely on spectrum-specific characteristics to perform the alignment. In this work, we aim to address the dense correspondence estimation problem in a way that generalizes to more than one spectrum. We do this by introducing a novel cycle-consistency metric that allows us to self-supervise. This, combined with our spectra-agnostic loss functions, allows us to train the same network across multiple spectra. We demonstrate our approach on the challenging task of dense RGB-FIR correspondence estimation. We also show the performance of our unmodified network on the cases of RGB-NIR and RGB-RGB, where we achieve higher accuracy than similar self-supervised approaches. Our work shows that cross-spectral correspondence estimation can be solved in a common framework that learns to generalize alignment across spectra.
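
    The cycle-consistency idea can be sketched briefly: map each pixel of image A into B with the predicted correspondence field, map it back with the reverse prediction, and penalize points that fail to return to where they started. The snippet below is a generic NumPy illustration under assumed dense flow fields; the nearest-neighbor lookup and function name are simplifications, not the paper's network or loss.

```python
import numpy as np

def cycle_consistency_error(flow_ab: np.ndarray, flow_ba: np.ndarray) -> np.ndarray:
    """Per-pixel round-trip error for dense correspondences.

    flow_ab, flow_ba: (H, W, 2) arrays of (dx, dy) displacements mapping
    image A -> B and B -> A. Nearest-neighbor lookup is used for brevity;
    a real implementation would sample flow_ba bilinearly.
    """
    h, w = flow_ab.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Forward mapping: where each pixel of A lands in B
    xb = np.clip(np.round(xs + flow_ab[..., 0]).astype(int), 0, w - 1)
    yb = np.clip(np.round(ys + flow_ab[..., 1]).astype(int), 0, h - 1)
    # Backward mapping applied at the landed positions
    back = flow_ba[yb, xb]
    x_back = xb + back[..., 0]
    y_back = yb + back[..., 1]
    return np.hypot(x_back - xs, y_back - ys)  # distance from the start point
```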

    Superresolution imaging: A survey of current techniques

    Full text link
    Cristóbal, G., Gil, E., Šroubek, F., Flusser, J., Miravet, C., Rodríguez, F. B., "Superresolution imaging: A survey of current techniques", Proceedings of SPIE - The International Society for Optical Engineering, 7074, 2008. Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited size of sensors) and instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy, and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images. Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution (SR). The stability of these methods depends on having more than one image of the same scene. Differences between images are necessary to provide new information, but they can be almost imperceptible. State-of-the-art SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between images, but they lack any apparatus for estimating the blurs. In this paper, after a review of current SR techniques, we describe two SR methods recently developed by the authors. First, we introduce a variational method that minimizes a regularized energy function with respect to the high-resolution image and the blurs. In this way we establish a unified framework for simultaneously estimating the blurs and the high-resolution image. By estimating the blurs we automatically estimate shifts with subpixel accuracy, which is essential for good SR performance. Second, an innovative learning-based algorithm using a neural architecture for SR is described. Comparative experiments on real data illustrate the robustness and utility of both methods. This research has been partially supported by grants TEC2007-67025/TCM, TEC2006-28009-E, BFI-2003-07276, and TIN-2004-04363-C03-03 from the Spanish Ministry of Science and Innovation, by PROFIT projects FIT-070000-2003-475 and FIT-330100-2004-91, by the Czech Ministry of Education under project No. 1M0572 (Research Center DAR), by the Czech Science Foundation under project No. GACR 102/08/1593, and by the CSIC-CAS bilateral project 2006CZ002.
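
    As a rough illustration of the variational idea (not the authors' exact functional), one can minimize a data term between blurred, downsampled versions of the high-resolution estimate and the observed low-resolution frames, plus a smoothness regularizer. The sketch below shows one gradient step under assumed knowns: a Gaussian blur of fixed sigma, integer decimation, and given subpixel shifts; in the paper these blurs and shifts are themselves estimated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def sr_gradient_step(hr, lr_frames, shifts, scale=2, blur_sigma=1.0, lam=0.01, step=0.1):
    """One gradient-descent step on a toy SR energy.

    E(hr) = sum_k ||D(B(S_k(hr))) - lr_k||^2 + lam * ||grad hr||^2
    S_k: subpixel shift, B: Gaussian blur (assumed known here), D: decimation
    by `scale`. Each lr frame is assumed to be hr's size divided by `scale`.
    """
    grad = np.zeros_like(hr)
    for lr, (dy, dx) in zip(lr_frames, shifts):
        warped = shift(hr, (dy, dx), order=1)
        blurred = gaussian_filter(warped, blur_sigma)
        residual = blurred[::scale, ::scale] - lr
        # Adjoint of the forward model: upsample residual, blur, shift back
        up = np.zeros_like(hr)
        up[::scale, ::scale] = residual
        grad += 2.0 * shift(gaussian_filter(up, blur_sigma), (-dy, -dx), order=1)
    # Gradient of the quadratic smoothness term (discrete Laplacian)
    lap = (np.roll(hr, 1, 0) + np.roll(hr, -1, 0) +
           np.roll(hr, 1, 1) + np.roll(hr, -1, 1) - 4 * hr)
    grad -= 2.0 * lam * lap
    return hr - step * grad
```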

    Multi-modal video analysis for early fire detection

    Get PDF
    This dissertation investigates several aspects of an intelligent video-based fire detection system. The first part focuses on the multimodal processing of visual, infrared, and time-of-flight video, which improves upon purely visual detection. To keep the processing cost as low as possible, with a view to real-time detection, a set of low-cost fire features that uniquely describe fire and flames is selected for each sensor type. By fusing the different types of information, the number of missed detections and false alarms can be reduced, resulting in a significant improvement in video-based fire detection. To combine the multimodal detection results, however, the multimodal images must first be registered (i.e., aligned). The second part of the dissertation therefore focuses on this fusion of multimodal data and presents a new silhouette-based registration method. The third and final part proposes methods for video-based fire analysis and, at a later stage, fire modeling. Each of the proposed techniques for multimodal detection and multi-view localization has been extensively tested in practice; among other things, successful tests were carried out for the early detection of car fires in underground parking garages.
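
    A minimal sketch of the fusion idea is given below. The per-modality scores, weights, and decision rule are illustrative assumptions, not the dissertation's detector: each modality is assumed to produce a per-frame flame probability from its own low-cost features, and agreement between modalities raises confidence while disagreement suppresses false alarms.

```python
def fuse_fire_scores(visual: float, infrared: float, tof: float,
                     weights=(0.4, 0.4, 0.2), alarm_threshold: float = 0.6) -> bool:
    """Combine per-modality flame probabilities (each in 0..1) into one decision.

    The weights and threshold are placeholder values; the dissertation selects
    features and combination rules per sensor type.
    """
    scores = (visual, infrared, tof)
    fused = sum(w * s for w, s in zip(weights, scores))
    # Require at least two modalities to agree before raising an alarm
    agreeing = sum(s > 0.5 for s in scores)
    return fused >= alarm_threshold and agreeing >= 2
```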

    Nevada Test Site-Directed Research and Development: FY 2006 Report

    Full text link

    Stereo Vision: A Comparison of Synthetic Imagery vs. Real World Imagery for the Automated Aerial Refueling Problem

    Get PDF
    Missions using unmanned aerial vehicles have increased in the past decade, yet there is currently no operational way to refuel these aircraft in flight. Automated aerial refueling can be made possible using a stereo vision system on the tanker. Real-world experiments for the automated aerial refueling problem are expensive and time consuming, whereas simulations performed in a virtual world have shown promising results using computer vision, making the virtual world a possible substitute environment for the real world. This research compares the performance of stereo vision algorithms on synthetic and real-world imagery.
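
    The core geometric relation behind such stereo vision systems is the disparity-to-depth conversion for a rectified camera pair; the sketch below is generic, and the focal length and baseline values are placeholders rather than the thesis's tanker camera parameters.

```python
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray, focal_px: float,
                       baseline_m: float) -> np.ndarray:
    """Depth in meters from disparity in pixels for a rectified stereo pair.

    Z = f * B / d, with focal length f in pixels and baseline B in meters.
    Non-positive disparities are masked to avoid division by zero.
    """
    d = np.where(disparity_px > 0, disparity_px, np.nan)
    return focal_px * baseline_m / d

# Example with placeholder calibration values
depth = disparity_to_depth(np.array([[8.0, 16.0]]), focal_px=1200.0, baseline_m=0.5)
# -> [[75.0, 37.5]] meters
```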

    Optical Tracking and Spectral Characterization of Cubesats for Operational Missions

    Get PDF
    Orbital debris in low Earth orbit is of growing concern to operational satellites from the government and commercial sectors. With an uptick in worldwide satellite launches and the growing adoption of the CubeSat standard, the number of small objects in orbit is increasing at a faster pace than ever. As a result, a cascading collision event seems inevitable in the near future. The United States Strategic Command tracks and determines the orbits of resident space objects using a worldwide network of radar and optical sensors. However, in order to better protect space assets, there has been increased interest in knowing not just where a space object is, but what the object is. The optical and spectral characteristics of solar light reflected off satellites or debris can provide information on the physical state or identity of the object. These same optical signatures can be used for mission support of operational satellite missions, down to satellites as small as CubeSats. Optical observation of CubeSats could provide independent monitoring of spin rate, deployable status, identification of individual CubeSats in a swarm, or possibly attitude information. This thesis first introduces the reader to a review of available observation techniques, followed by the basics of observational astronomy relevant to satellite tracking. The thesis then presents OSCOM, a system for Optical tracking and Spectral characterization of CubeSats for Operational Missions. OSCOM is a ground-based system capable of observing and characterizing small debris and CubeSats with commercially available optical telescopes and detectors; it is just as applicable to larger satellites, which have a higher signal-to-noise ratio. OSCOM has been used to successfully collect time-series photometry of more than 60 unique satellites of all sizes. Selected photometry results are presented along with a discussion of the technical details required for optical observation of small satellites.
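
    Time-series photometry of this kind reduces each frame to an instrumental magnitude, and periodic brightness variation then reveals spin rate. The sketch below shows the basic reduction; the aperture-sum inputs, exposure handling, and FFT-based period estimate are generic assumptions, not the OSCOM pipeline, which would typically use a Lomb-Scargle periodogram for unevenly sampled data.

```python
import numpy as np

def instrumental_magnitude(source_counts: float, background_counts: float,
                           exposure_s: float) -> float:
    """Instrumental magnitude from background-subtracted aperture counts."""
    flux = max(source_counts - background_counts, 1e-9) / exposure_s
    return -2.5 * np.log10(flux)

def spin_period_estimate(times_s: np.ndarray, mags: np.ndarray) -> float:
    """Crude period estimate from the dominant FFT peak of an (assumed)
    evenly sampled light curve."""
    mags = mags - np.mean(mags)
    freqs = np.fft.rfftfreq(len(times_s), d=np.median(np.diff(times_s)))
    power = np.abs(np.fft.rfft(mags))
    peak = np.argmax(power[1:]) + 1   # skip the DC term
    return 1.0 / freqs[peak]
```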

    Enhanced phase congruency feature-based image registration for multimodal remote sensing imagery

    Get PDF
    Multimodal image registration is an essential image processing task in remote sensing. Fundamentally, multimodal image registration searches for the optimal alignment between images of the same scene captured by different sensors, to provide better visualization and more informative images. Manual image registration is tedious and labor-intensive, so developing automated image registration is crucial to providing a faster and more reliable solution. However, image registration faces many challenges arising from the nature of remote sensing images, the environment, and the technical shortcomings of current methods, which give rise to three issues: intensive processing requirements, local intensity variation, and rotational distortion. Since not all image details are significant, relying on salient features is more efficient in terms of processing power; thus, a feature-based registration method was adopted to avoid intensive processing. The proposed method resolves the rotational distortion issue using Oriented FAST and Rotated BRIEF (ORB) to produce rotation-invariant features. However, since ORB is not intensity invariant, it cannot support multimodal data on its own. To overcome the intensity variation issue, Phase Congruency (PC) was integrated with ORB to introduce ORB-PC feature extraction, which generates features invariant to both rotational distortion and local intensity variation. The solution is still incomplete, however, since the ORB-PC matching rate falls below expectations. Enhanced ORB-PC was therefore proposed to solve the matching issue by modifying the feature descriptor. While better feature matches were achieved, the high number of outliers in multimodal data renders common outlier removal methods unsuccessful. Therefore, Normalized Barycentric Coordinate System (NBCS) outlier removal was utilized to find precise matches even with a high number of outliers. Experiments were conducted to verify the registration both qualitatively and quantitatively. The qualitative experiments show that the proposed method achieves a broader and better feature distribution, while the quantitative evaluation indicates an 18% improvement in registration accuracy compared to related works.
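
    For orientation, a plain ORB detect-and-match pipeline in OpenCV looks like the sketch below. The phase congruency integration, descriptor modification, and NBCS outlier removal described in the thesis are not part of standard OpenCV, so this only illustrates the baseline the work builds on; the function name and parameter values are assumptions.

```python
import cv2

def orb_match(img_ref, img_mov, n_features: int = 2000):
    """Baseline ORB matching between two grayscale images (no PC, no NBCS)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img_ref, None)
    kp2, des2 = orb.detectAndCompute(img_mov, None)
    # Hamming distance suits ORB's binary descriptors; cross-check prunes weak matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```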

    Enhanced cyberspace defense with real-time distributed systems using covert channel publish-subscribe broker pattern communications

    Get PDF
    In this thesis, we propose a novel cyberspace defense solution to the growing sophistication of threats facing networks within the Department of Defense. Current network defense strategies, including traditional intrusion detection and firewall-based perimeter defenses, are ineffective against increasingly sophisticated social engineering attacks, such as spear-phishing, which exploit individuals with targeted information. These asymmetric attacks are able to bypass current network defense technologies, allowing adversaries extended and often unrestricted access to portions of the enterprise. Network defense strategies are further hampered by solutions favoring network-centric designs that disregard the security requirements of the specific data and information on the networks. Our solution leverages technology characteristics from traditional network defense systems and real-time distributed systems using publish-subscribe broker patterns to form the foundation of a full-spectrum cyber operations capability. Building on this foundation, we present the addition of covert channel communications within the distributed systems framework to protect sensitive Command and Control and Battle Management messaging from adversary intercept and exploitation. Through this combined approach, DoD and Service network defense professionals will be able to meet sophisticated cyberspace threats head-on while simultaneously protecting the data and information critical to warfighting Commands, Services, and Agencies.
    http://archive.org/details/enhancedcyberspa109454049
    US Air Force (USAF) author. Approved for public release; distribution is unlimited.
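
    As a generic illustration of the publish-subscribe broker pattern the thesis builds on, a topic-based broker routes each published message to the handlers registered for that topic. This minimal sketch is not the proposed covert-channel design and omits any DDS middleware, security, or real-time machinery; the class and topic names are assumptions.

```python
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Minimal topic-based publish-subscribe broker."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

# Usage with a placeholder topic name
broker = Broker()
broker.subscribe("c2/battle-management", lambda msg: print("received:", msg))
broker.publish("c2/battle-management", {"status": "ok"})
```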