
    The NASA SBIR product catalog

    The purpose of this catalog is to assist small business firms in making the community aware of products emerging from their efforts in the Small Business Innovation Research (SBIR) program. It contains descriptions of some products that have advanced into Phase 3 and others that are identified as prospective products. Both lists of products in this catalog are based on information supplied by NASA SBIR contractors in response to an invitation to be represented in this document. Generally, all products suggested by the small firms were included in order to meet the goals of information exchange for SBIR results. Of the 444 SBIR contractors NASA queried, 137 provided information on 219 products. The catalog presents the product information in the technology areas listed in the table of contents. Within each area, the products are listed in alphabetical order by product name and are given identifying numbers. Also included is an alphabetical listing of the companies whose products are described. This listing cross-references the product list and provides information on the business activity of each firm. In addition, there are three indexes: one listing firms by state, one listing the products according to the NASA Centers that managed the SBIR projects, and one listing the products by the relevant Technical Topics in NASA's annual program solicitation under which each SBIR project was selected.

    Applications of Silicon Retinas: from Neuroscience to Computer Vision

    Traditional visual sensor technology is firmly rooted in the concept of sequences of image frames. The sequence of stroboscopic images in these "frame cameras" is very different from the information running from the retina to the visual cortex. While conventional cameras have improved in the direction of smaller pixels and higher frame rates, the basics of image acquisition have remained the same. Event-based vision sensors were originally known as "silicon retinas" but are now widely called "event cameras." They are a new type of vision sensor that takes inspiration from the mechanisms developed by nature for the mammalian retina and suggests a different way of perceiving the world. As in the neural system, the sensed information is encoded in a train of spikes, or so-called events, comparable to the action potentials generated in the nerve. Event-based sensors produce sparse and asynchronous output that represents informative changes in the scene. These sensors have advantages in terms of fast response, low latency, high dynamic range, and sparse output. All these characteristics are appealing for computer vision and robotic applications, increasing the interest in this kind of sensor. However, since the sensor's output is very different, algorithms designed for frames need to be rethought and re-adapted. This thesis focuses on several applications of event cameras in scientific scenarios. It aims to identify where they can make a difference compared to frame cameras. The presented applications use the Dynamic Vision Sensor (the event camera developed by the Sensors Group of the Institute of Neuroinformatics, University of Zurich and ETH). To explore some applications in more extreme situations, the first chapters of the thesis focus on the characterization of several advanced versions of the standard DVS. Low light represents a challenging condition for every vision sensor.
Taking inspiration from standard Complementary Metal Oxide Semiconductor (CMOS) technology, the DVS pixel performance in low-light scenarios can be improved, increasing sensitivity and quantum efficiency, by using back-side illumination. This thesis characterizes the so-called Back Side Illumination DAVIS (BSI DAVIS) camera and shows results from its application to calcium imaging of neural activity. The BSI DAVIS shows better performance in low-light scenes due to its high Quantum Efficiency (QE) of 93% and proved to be the most suitable technology for microscopy applications. The BSI DAVIS allows detecting fast dynamic changes in neural fluorescence imaging using the green fluorescent calcium indicator GCaMP6f. Event camera advances have pushed the exploration of event-based cameras in computer vision tasks. Chapters of this thesis focus on two of the most active research areas in computer vision: human pose estimation and hand gesture classification. Both chapters report the datasets collected to achieve the task, fulfilling the continuous need for data for this kind of new technology. The Dynamic Vision Sensor Human Pose dataset (DHP19) is an extensive collection of 33 whole-body human actions from 17 subjects. The chapter presents the first benchmark neural network model for 3D pose estimation using DHP19. The network achieves a mean error of less than 8 mm in 3D space, which is comparable with frame-based Human Pose Estimation (HPE) methods. The gesture classification chapter reports an application running on a mobile device and explores future developments in the direction of embedded portable low-power devices for online processing. The sparse output from the sensor suggests using a small model with a reduced number of parameters and low power consumption. The thesis also describes pilot results from two other scientific imaging applications, raindrop size measurement and laser speckle analysis, presented in the appendices.
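    The event stream described above can be illustrated with a minimal sketch. The event layout (x, y, timestamp, polarity) and the accumulation window are assumptions for illustration; real DVS drivers and libraries use their own packet formats. One common way to feed frame-based algorithms is to sum event polarities per pixel over a short time window:

```python
import numpy as np

# Hypothetical event stream: each event is (x, y, timestamp_us, polarity).
events = np.array([
    (10, 5, 1000, 1),
    (10, 5, 1500, -1),
    (3, 7, 1800, 1),
    (10, 5, 2200, 1),
], dtype=[("x", int), ("y", int), ("t", int), ("p", int)])

def accumulate(events, width, height, t_start, t_end):
    """Sum event polarities per pixel over a time window, producing a
    signed 'event frame' that frame-based algorithms can consume."""
    frame = np.zeros((height, width), dtype=int)
    mask = (events["t"] >= t_start) & (events["t"] < t_end)
    for e in events[mask]:
        frame[e["y"], e["x"]] += e["p"]
    return frame

frame = accumulate(events, width=16, height=16, t_start=0, t_end=2000)
print(frame[5, 10])  # net polarity at pixel (10, 5): +1 and -1 cancel -> 0
```

    The sparsity is visible in the result: only pixels that changed carry nonzero values, which is what makes small, low-power models attractive downstream.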

    Spectral LADAR: Active Range-Resolved Imaging Spectroscopy

    Imaging spectroscopy using ambient or thermally generated optical sources is a well-developed technique for capturing two-dimensional images with high per-pixel spectral resolution. The per-pixel spectral data are often a sufficient sampling of a material's backscatter spectrum to infer chemical properties of the constituent material and aid in substance identification. Separately, conventional LADAR sensors use quasi-monochromatic laser radiation to create three-dimensional images of objects at high angular resolution compared to RADAR. Advances in dispersion-engineered photonic crystal fibers in recent years have made high-spectral-radiance optical supercontinuum sources practical, enabling this study of Spectral LADAR, a continuous polychromatic spectrum augmentation of conventional LADAR. This imaging concept, which combines multispectral and 3D sensing at a physical level, is demonstrated with 25 independent and parallel LADAR channels and generates point cloud images with three spatial dimensions and one spectral dimension. The independence of spectral bands is a key characteristic of Spectral LADAR. Each spectral band maintains a separate time waveform record, from which target parameters are estimated. Accordingly, the spectrum computed for each backscatter reflection is independently and unambiguously range unmixed from multiple target reflections that may arise from the transmission of a single panchromatic pulse. This dissertation presents the theoretical background of Spectral LADAR, a shortwave infrared laboratory demonstrator system constructed as a proof-of-concept prototype, and the experimental results obtained by the prototype when imaging scenes at standoff ranges of 45 meters. The resultant point cloud voxels are spectrally classified into a number of material categories, which enhances object and feature recognition.
Experimental results demonstrate the physical-level combination of active backscatter spectroscopy and range-resolved sensing to produce images with a level of complexity, detail, and accuracy that is not obtainable with data-level registration and fusion of conventional imaging spectroscopy and LADAR. The capabilities of Spectral LADAR are expected to be useful in a range of applications, such as biomedical imaging and agriculture, but particularly when applied as a sensor in unmanned ground vehicle navigation. Applications to autonomous mobile robotics are the principal motivators of this study and are specifically addressed.
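    The per-band range unmixing described above can be sketched with synthetic waveforms. The threshold-based peak detection and the waveform shapes are illustrative assumptions, not the dissertation's actual estimator; the point is that each spectral band's time record yields its own set of range estimates, so multiple returns from one pulse can be separated:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def ranges_per_band(waveforms, dt, threshold):
    """For each spectral band's time waveform, find return peaks above a
    threshold and convert round-trip times to one-way ranges.
    waveforms: (n_bands, n_samples) array; dt: sample period in seconds.
    Returns one range array per band; bands that see different surfaces
    report different range sets, which is what enables range unmixing."""
    out = []
    for w in waveforms:
        # local maxima above threshold (simple illustrative detector)
        peaks = np.where((w[1:-1] > w[:-2]) & (w[1:-1] >= w[2:]) &
                         (w[1:-1] > threshold))[0] + 1
        out.append(peaks * dt * C / 2.0)  # round trip -> one-way range
    return out

# Synthetic example: band 0 sees a single return; band 1 sees two
# returns (e.g. partial obscurant plus the target behind it).
wf = np.zeros((2, 64))
wf[0, 30] = 1.0
wf[1, 30] = 0.6
wf[1, 45] = 0.8
ranges = ranges_per_band(wf, dt=1e-9, threshold=0.3)
print(ranges[0])  # one return at 4.5 m
print(ranges[1])  # two returns: 4.5 m and 6.75 m
```

    Because each band keeps its own waveform, the spectrum attributed to the 4.5 m return is never mixed with the 6.75 m return, mirroring the independence property emphasized above.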

    Roadmap on signal processing for next generation measurement systems

    Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences are representative forms of signals that can be enhanced and analysed for information extraction and quantification. Recent advances in artificial intelligence and machine learning are shifting research attention towards intelligent, data-driven signal processing. This roadmap presents a critical overview of state-of-the-art methods and applications, aiming to highlight future challenges and research opportunities towards next generation measurement systems. It covers a broad spectrum of topics ranging from basic to industrial research, organized in concise thematic sections that reflect the trends and the impact of current and future developments in each research field. Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.

    Artificial Intelligence in Classical and Quantum Photonics

    The last decades have seen a huge rise of artificial intelligence (AI) as a powerful tool to boost industrial and scientific research in a broad range of fields. AI and photonics are developing a promising two-way synergy: on the one hand, AI approaches can be used to control a number of complex linear and nonlinear photonic processes, in both the classical and quantum regimes; on the other hand, photonics can pave the way for a new class of platforms to accelerate AI tasks. This review provides the reader with the fundamental notions of machine learning (ML) and neural networks (NNs) and presents the main AI applications in the fields of spectroscopy and chemometrics, computational imaging (CI), wavefront shaping, and quantum optics. The review concludes with an overview of future developments of the promising synergy between AI and photonics.

    Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    Coherent imaging systems offer unique benefits to system operators in terms of resolving power, range gating, selective illumination, and utility for applications where passively illuminated targets have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity of the partially coherent light at the imaging detector. The estimator is initially constructed using a shift-invariant system model and is later extended to the case of a shift-variant optical system by the addition of a transfer function term that quantifies optical blur for wide fields of view and atmospheric conditions. The estimators are evaluated using both synthetically generated imagery and experimentally collected image data from an outdoor optical range. The research is extended to consider the effects of weighted frame averaging for the individual short-exposure frames collected by the imaging system. It was found that binary weighting of ensemble frames significantly increases spatial resolution.
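    The binary frame weighting mentioned above can be sketched in the style of frame selection: assign weight 1 to the frames judged sharpest and weight 0 to the rest before averaging. The gradient-energy sharpness metric and the keep fraction here are illustrative assumptions; the dissertation's actual weighting criterion may differ.

```python
import numpy as np

def binary_weighted_average(frames, keep_fraction=0.5):
    """Average only the sharpest short-exposure frames of an ensemble.
    Sharpness proxy: image gradient energy (an assumed, common choice).
    frames: (n, h, w) array. Weights are binary: 1 for kept frames, 0 otherwise."""
    gy, gx = np.gradient(frames.astype(float), axis=(1, 2))
    sharpness = (gx**2 + gy**2).sum(axis=(1, 2))
    n_keep = max(1, int(len(frames) * keep_fraction))
    keep = np.argsort(sharpness)[-n_keep:]  # indices of the sharpest frames
    return frames[keep].mean(axis=0)

# Toy ensemble: frame 0 has a sharp edge, frame 1 is featureless.
frames = np.zeros((2, 4, 4))
frames[0, :, 2:] = 1.0   # sharp vertical edge
frames[1, :, :] = 0.5    # blurred-out (constant) frame
result = binary_weighted_average(frames, keep_fraction=0.5)
```

    Dropping the most blurred frames rather than averaging everything is why binary weighting can recover spatial frequencies that a plain ensemble mean would wash out.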