
    Quantum-inspired computational imaging

    Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable and robust data processing, has led to increased activity, with notable results in the domain of low-light flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools. Y.A. acknowledges support from the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF201617/16/31). S.McL. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grant EP/J015180/1). V.G. acknowledges support from the U.S. Defense Advanced Research Projects Agency (DARPA) InPho program through U.S. Army Research Office award W911NF-10-1-0404, the U.S. DARPA REVEAL program through contract HR0011-16-C-0030, and the U.S. National Science Foundation through grants 1161413 and 1422034. A.H. acknowledges support from U.S. Army Research Office award W911NF-15-1-0479, U.S. Department of the Air Force grant FA8650-15-D-1845, and U.S. Department of Energy National Nuclear Security Administration grant DE-NA0002534. D.F. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grants EP/M006514/1 and EP/M01326X/1).

    An Indoor Navigation System Using a Sensor Fusion Scheme on Android Platform

    With the development of wireless communication networks, smartphones have become a necessity of daily life; they meet not only users' basic needs, such as sending a message or making a phone call, but also their demands for entertainment, browsing the Internet and socializing. Navigation functions are commonly used, but they typically rely on GPS (Global Positioning System) and therefore work only outdoors, whereas many applications need to navigate indoors. This paper presents a system for highly accurate indoor navigation on the Android platform. To this end, we design a sensor fusion scheme and divide the system into three main modules: a distance measurement module, an orientation detection module and a position update module. The distance measurement module estimates stride length efficiently and uses the step sensor to count steps. In the orientation detection module, we introduce a Kalman filter to de-noise the data collected from the different sensors and obtain an optimal orientation estimate. The position update module combines the outputs of the previous modules to calculate the current location. Experimental results show that our system works well and achieves high accuracy indoors.
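    The fusion scheme can be illustrated with a minimal sketch. The code below is a simplified, hypothetical rendering of the orientation and position update steps, not the paper's implementation (the function names, noise variances and 0.7 m stride are illustrative assumptions): a one-dimensional Kalman filter fuses gyroscope rate with compass heading, and a dead-reckoning loop then advances the position one stride per detected step.

```python
import numpy as np

def fuse_heading(gyro_rates, compass_headings, dt=0.02, q=0.01, r=0.5):
    """1-D Kalman filter: predict with the gyro rate, correct with the
    compass heading. q and r are assumed process/measurement variances."""
    theta, p = compass_headings[0], 1.0   # initial heading and covariance
    fused = []
    for w, z in zip(gyro_rates, compass_headings):
        theta += w * dt                   # predict: integrate gyro rate
        p += q
        innov = (z - theta + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
        k = p / (p + r)                   # Kalman gain
        theta += k * innov                # correct with compass reading
        p *= 1.0 - k
        fused.append(theta)
    return np.asarray(fused)

def step_positions(headings_at_steps, stride=0.7, start=(0.0, 0.0)):
    """Dead reckoning: advance one (assumed) 0.7 m stride per counted step
    along the fused heading reported at that step."""
    x, y = start
    track = [(x, y)]
    for theta in headings_at_steps:
        x += stride * np.cos(theta)
        y += stride * np.sin(theta)
        track.append((x, y))
    return track
```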

    Proceedings of the 2018 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    The proceedings of the 2018 annual joint workshop of the Fraunhofer IOSB and the Vision and Fusion Laboratory (IES) of the KIT contain technical reports by the PhD students on the status of their research. The topics discussed range from computer vision and optical metrology to network security and machine learning. This volume provides a comprehensive and up-to-date overview of the research program of the IES Laboratory and the Fraunhofer IOSB.

    Helicopter flights with night-vision goggles: Human factors aspects

    Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviator's Night Vision Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. NVGs consist of light-intensifier tubes, which amplify low-intensity ambient illumination (star and moon light), and an optical system, which together produce a bright image of the scene. However, NVGs do not turn night into day; while they often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems that occur with the use of NVGs in flight. The issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; dazzle effects; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences for helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward-looking infrared, FLIR) are described briefly and compared to light-intensifier systems (NVGs). Many of the phenomena described are not yet well understood; more research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.

    FPGA design and implementation of a framework for optogenetic retinal prosthesis

    PhD Thesis. There are 285 million people worldwide with a visual impairment: 39 million are completely blind and 246 million have low vision. In the UK and other developed western countries, retinal dystrophy diseases are the primary cause of blindness, especially age-related macular degeneration (AMD), diabetic retinopathy and retinitis pigmentosa (RP). Various treatments and aids can help with these visual disorders, such as low-vision aids, gene therapy and retinal prostheses. A retinal prosthesis consists of four main stages: the input stage (image acquisition), the high-level processing stage (image preparation and retinal encoding), the low-level processing stage (stimulation controller) and the output stage (image display on the opto-electronic micro-LED array). To date, only a limited number of full hardware implementations of retinal prostheses have been available. In this work, a photonic stimulation controller was designed and implemented. The main role of this controller is to improve the framework's power and timing results. It comprises, first, an even power distributor, which distributes power evenly across image sub-frames to avoid large power surges, especially with large arrays, thereby improving the framework's overall power results. Second, a pulse encoder is used to select different modes of operation for the opto-electronic micro-LED array, which improves the framework's overall timing. The implementation uses reconfigurable hardware devices, i.e. Field Programmable Gate Arrays (FPGAs), to achieve high performance at an economical price. Moreover, this FPGA-based framework for an optogenetic retinal prosthesis aims to control the opto-electronic micro-LED array efficiently, and to interface between the opto-electronic micro-LED array hardware architecture and the previously developed high-level retinal prosthesis image processing algorithms. University of Jordan
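    To make the even power distribution concrete, here is a minimal software sketch of the idea; the thesis implements this in FPGA hardware, so the flattened-frame representation and budget parameter below are illustrative assumptions only. A binary micro-LED frame is greedily split into sub-frames so that no sub-frame turns on more LEDs than a fixed budget, spreading the instantaneous power draw over time.

```python
import numpy as np

def split_into_subframes(frame, max_on_per_subframe):
    """Partition a flattened binary micro-LED frame into sub-frames so no
    sub-frame drives more than max_on_per_subframe LEDs simultaneously."""
    on_idx = np.flatnonzero(frame)            # indices of LEDs to light
    subframes = []
    for start in range(0, len(on_idx), max_on_per_subframe):
        sub = np.zeros_like(frame)
        sub[on_idx[start:start + max_on_per_subframe]] = 1
        subframes.append(sub)
    return subframes                          # displayed in quick succession

# Example: a 16-LED frame with 9 LEDs on, capped at 3 LEDs per sub-frame,
# yields three sub-frames and a third of the peak current draw.
frame = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
print(len(split_into_subframes(frame, 3)))   # -> 3
```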

    Advanced sensors technology survey

    This project assesses the state of the art in advanced or 'smart' sensor technology for NASA Life Sciences research applications, with an emphasis on sensors with potential applications on Space Station Freedom (SSF). The objectives are: (1) to conduct literature reviews on relevant advanced sensor technology; (2) to interview scientists and engineers in industry, academia, and government who are knowledgeable on this topic; (3) to provide viewpoints and opinions regarding the potential applications of this technology on the SSF; and (4) to provide summary charts of relevant technologies and centers where these technologies are being developed.

    On-line quality control in polymer processing using hyperspectral imaging

    The use of plastic composite materials has been increasing in recent years in order to reduce the amount of material used and/or use more economical raw materials, all without compromising the properties. The impressive adaptability of these composite materials comes from the fact that the manufacturer can choose the raw materials, the proportion in which they are blended, as well as the processing conditions. However, these materials tend to suffer from heterogeneous compositions and structures, which lead to mechanical weaknesses. Product quality is generally measured in the laboratory, using destructive tests often requiring extensive sample preparation. On-line quality measurement would allow near-immediate feedback on the operating conditions and would be directly transferrable to quality control in an industrial production context. The proposed research consists of developing an on-line quality control tool adaptable to plastic materials of all types. A number of near-infrared and ultrasound probes presently exist for on-line composition estimation, but they only provide single-point values at each acquisition. These methods are therefore poorly suited to identifying the spatial distribution of a sample's surface characteristics (e.g. homogeneity, orientation, dispersion). To achieve this objective, a hyperspectral imaging system is proposed. Using this tool, it is possible to scan the surface of a sample and obtain a hyperspectral image, that is to say an image in which each pixel captures the light intensity at hundreds of wavelengths. Chemometrics methods can then be applied to this image in order to extract the relevant spatial and spectral features. Finally, multivariate regression methods are used to build a model between these features and the properties of the sample. This mathematical model forms the backbone of an on-line quality assessment tool used to predict and optimize the operating conditions under which the samples are processed.
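    As a concrete illustration of the multivariate regression step, the sketch below builds a partial least squares (PLS) calibration between hyperspectral scans and a lab-measured property. It is a minimal sketch under stated assumptions (mean-spectrum features, scikit-learn's PLSRegression, hypothetical variable names), not the thesis's actual model, which also exploits spatial features.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_quality_model(scans, property_values, n_components=5):
    """Regress a quality property on hyperspectral scans.

    scans: list of (rows, cols, wavelengths) hypercubes, one per sample.
    property_values: lab-measured quality value for each sample.
    Each scan is reduced to its mean spectrum (average over all pixels),
    a common first step before spatial features are added.
    """
    X = np.array([s.reshape(-1, s.shape[-1]).mean(axis=0) for s in scans])
    y = np.asarray(property_values)
    model = PLSRegression(n_components=n_components)
    model.fit(X, y)
    return model

# On-line use: score a new scan's mean spectrum against the calibration.
# y_hat = model.predict(new_scan.reshape(-1, new_scan.shape[-1])
#                       .mean(axis=0)[None, :])
```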

    Design, Development and Characterization of a Thermal Sensor Brick System for Modular Robotics

    This thesis presents work on a thermal imaging sensor brick (TISB) system for modular robotics, covering the design, development and characterization of the system. The TISB system is based on the sensor-brick design philosophy for modular robotics. In under-vehicle surveillance for threat detection, a target application of this work, we demonstrate the advantages of the TISB system over purely vision-based systems, highlighting it as an illumination-invariant system for detecting hidden threat objects in the undercarriage of a car. We compare the TISB system to the vision sensor brick system and to the mirror-on-a-stick, and illustrate the system's operational capability on the SafeBot under-vehicle robot, acquiring and transmitting data wirelessly. The early designs of the TISB system, the evolution of those designs, and the uniformity achieved while maintaining modularity in building the different sensor bricks (the visual, the thermal and the range sensor brick) are presented as part of this work. Each of these sensor brick systems, designed and implemented at the Imaging Robotics and Intelligent Systems (IRIS) laboratory, consists of four major blocks: a Sensing and Image Acquisition Block, which captures images or acquires data; a Pre-Processing and Fusion Block, which operates on the acquired images or data; a Communication Block, which transfers data between the sensor brick and the remote host computer; and a Power Block, which supplies power to the entire brick. The modular sensor bricks are self-sufficient, plug-and-play systems. The SafeBot under-vehicle robot, designed and implemented at the IRIS laboratory, has two tracked platforms, one on each side, with a payload bay area in the middle. Each tracked platform is a mobility brick based on the same design philosophy as the modular sensor bricks; the robot can carry one brick at a time or multiple bricks simultaneously. The contributions of this thesis are: (1) designing and developing the hardware implementation of the TISB system; (2) designing and developing the software for the TISB system; and (3) characterizing the TISB system, which is the major contribution of this thesis. The analysis of the thermal sensor brick system provides users and future designers with sufficient information on the parameters to consider in making the right choices for future modifications, the kinds of applications the TISB could handle, and the load that the different blocks of the TISB system could manage. Under-vehicle surveillance for threat detection, perimeter/area surveillance, scouting, and improvised explosive device (IED) detection using a car-mounted system are some of the applications identified for this system.
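    The four-block brick architecture lends itself to a simple composition pattern. The sketch below is a hypothetical, language-level rendering of that modularity (the class and method names are illustrative, not the thesis's software): a brick composes the four blocks behind one plug-and-play interface, so swapping only the sensing block turns a thermal brick into a visual or range brick.

```python
class SensorBrick:
    """One self-sufficient brick: the four blocks behind one interface."""

    def __init__(self, sensing, processing, comms, power):
        self.sensing = sensing        # Sensing and Image Acquisition Block
        self.processing = processing  # Pre-Processing and Fusion Block
        self.comms = comms            # Communication Block
        self.power = power            # Power Block

    def run_once(self):
        if not self.power.ok():                # check the supply first
            raise RuntimeError("power fault")
        frame = self.sensing.acquire()         # grab one frame of data
        result = self.processing.apply(frame)  # pre-process / fuse on-brick
        self.comms.send(result)                # ship to the remote host

# Swapping only the sensing block yields a different brick (hypothetical
# block classes for illustration):
# thermal_brick = SensorBrick(ThermalCamera(), Fusion(), WiFiLink(), Battery())
# visual_brick  = SensorBrick(VisualCamera(), Fusion(), WiFiLink(), Battery())
```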

    A novel automated approach of multi-modality retinal image registration and fusion

    Biomedical image registration and fusion are usually scene dependent and require intensive computational effort. A novel automated approach to feature-based control point detection and area-based registration and fusion of retinal images has been designed and developed. The new algorithm, which is reliable and time-efficient, adapts automatically from frame to frame with few tunable threshold parameters. The reference and the to-be-registered images come from two different modalities, i.e. grayscale angiogram images and color fundus images. The comparative study of retinal images enhances the information on the fundus image by superimposing information contained in the angiogram image. This thesis research makes two new contributions to the biomedical image registration and fusion area. The first contribution is automatic control point detection at global direction-change pixels using an adaptive exploratory algorithm; shape similarity criteria are employed to match the control points. The second contribution is a heuristic optimization algorithm that maximizes a Mutual-Pixel-Count (MPC) objective function. The initially selected control points are adjusted at the sub-pixel level during the optimization. A result equivalent to a global maximum is achieved by calculating MPC local maxima at an efficient computational cost; the iteration stops either when MPC reaches its maximum value or when the maximum allowable loop count is reached. To our knowledge, this is the first time the MPC concept has been introduced into the biomedical image fusion area as a measurement criterion for fusion accuracy. The fused image is generated from the control point coordinates in effect when the iteration stops. A comparative study of the presented automatic registration and fusion scheme against a centerline control point detection algorithm, a genetic algorithm, an RMSE objective function, and other existing data fusion approaches has shown the advantage of the new approach in terms of accuracy, efficiency, and novelty.
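    To make the MPC objective concrete, here is a minimal sketch under stated assumptions: binary vessel masks, integer translations only, and a small exhaustive search window. The actual algorithm adjusts control points at the sub-pixel level with a heuristic optimizer, but the scoring idea (count the pixels where the two registered images coincide) is the same.

```python
import numpy as np

def mutual_pixel_count(ref_mask, mov_mask, tx, ty):
    """Count pixels where the shifted moving mask overlaps the reference
    mask -- a simplified stand-in for the MPC objective."""
    shifted = np.roll(np.roll(mov_mask, ty, axis=0), tx, axis=1)
    return int(np.logical_and(ref_mask, shifted).sum())

def refine_translation(ref_mask, mov_mask, search=3):
    """Search a small window of integer shifts and keep the one that
    maximizes MPC; stops when the window (the loop bound) is exhausted."""
    best_tx, best_ty = 0, 0
    best_mpc = mutual_pixel_count(ref_mask, mov_mask, 0, 0)
    for tx in range(-search, search + 1):
        for ty in range(-search, search + 1):
            mpc = mutual_pixel_count(ref_mask, mov_mask, tx, ty)
            if mpc > best_mpc:
                best_tx, best_ty, best_mpc = tx, ty, mpc
    return best_tx, best_ty, best_mpc
```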