
    Dual use of image based tracking techniques: Laser eye surgery and low vision prosthesis

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to achieve shift invariance to scale and rotation in correlation pattern recognition; this technology is being applied to compensating for certain visual-field defects in people with low vision. The second uses the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image-fixation tool to assist in calculating shape from motion, it is being applied to tracking eyeball motion quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.
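
    The abstract does not spell out the warping stage, but the standard way to make a translation-only correlator insensitive to scale and rotation is a log-polar remapping, under which a scale change becomes a shift along one output axis and a rotation a shift along the other. A minimal sketch of that idea (function name, grid size, and interpolation choices are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_warp(image, output_shape=(128, 256)):
    """Resample a grayscale image onto a log-polar grid so that scaling and
    rotation of the input become plain shifts along the two output axes.
    Illustrative helper; the paper's own warping stage is not specified."""
    rows, cols = output_shape
    cy, cx = (np.asarray(image.shape, dtype=float) - 1) / 2.0
    max_radius = np.hypot(cy, cx)
    log_r = np.linspace(0.0, np.log(max_radius), rows)          # row index -> log radius
    theta = np.linspace(0.0, 2 * np.pi, cols, endpoint=False)   # column index -> angle
    r = np.exp(log_r)[:, None]
    y = cy + r * np.sin(theta)[None, :]
    x = cx + r * np.cos(theta)[None, :]
    return map_coordinates(image, [y, x], order=1, mode='constant')
```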

    Postmortem iris recognition and its application in human identification

    Iris recognition is a validated and non-invasive human identification technology currently implemented for surveillance and security purposes (e.g., border control, schools, the military). Like deoxyribonucleic acid (DNA), the iris is a highly individualizing component of the human body. Because iris patterns show little genetic penetrance, an individual's left and right irises differ from each other, as do the irises of identical twins, making the iris even more individualizing than DNA. To date, little to no research has been conducted on postmortem iris scanning as a biometric measurement of identification. The purpose of this pilot study is to explore the use of iris recognition as a tool for postmortem identification. Objectives of the study include determining whether current iris recognition technology can locate and detect iris codes in postmortem globes, and whether iris scans collected at different postmortem time intervals can be identified as the same iris initially enrolled. Data from 43 decedents involving 148 subsequent iris scans demonstrated a subsequent match rate of approximately 80%, supporting the theory that iris recognition technology is capable of detecting and identifying an individual's iris code in a postmortem setting. A chi-square test of independence showed no significant difference between match outcomes and the globe scanned (left vs. right), and gender had no bearing on the match outcome. There was a significant relationship between iris color and match outcome, with blue/gray eyes yielding a lower match rate (59%) than brown (82%) or green/hazel eyes (88%); however, the sample of blue/gray eyes in this study was too small to draw a meaningful conclusion. An isolated case involving an antemortem initial scan collected from an individual on life support yielded an accurate identification (match) with a subsequent scan captured at approximately 10 hours postmortem. Falsely rejected subsequent iris scans ("no match" results) occurred in about 20% of scans; they were observed at each postmortem interval (PMI) range and varied from 19-30%. This false reject rate is too high to reliably establish non-identity when used alone and would ideally be significantly lower prior to implementation in a forensic setting; a "no match" could, however, be confirmed using another method. Importantly, the data showed a false match rate, or false accept rate (FAR), of zero, a result consistent with previous iris recognition studies in living individuals. The preliminary results of this pilot study demonstrate a plausible role for iris recognition in postmortem human identification. Implementation of a universal iris recognition database would benefit the medicolegal death investigation and forensic pathology communities, and has potential applications to other situations such as missing persons and human trafficking cases.
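
    The abstract does not describe the matcher itself; iris recognition systems generally encode each iris as a binary "iris code" with an occlusion mask and declare a match when the fractional Hamming distance between two codes falls below a threshold. A minimal sketch of that comparison (function name and threshold are illustrative, not taken from the study):

```python
import numpy as np

def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Compare two binary iris codes, counting only bits that are valid
    (unoccluded) in both masks; low distances indicate the same iris.
    Daugman-style matching in general, not this study's specific matcher."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        return 1.0                      # nothing comparable: treat as non-match
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()

# Typical decision rule (threshold value is an assumption, not from the study):
# match if fractional_hamming_distance(a, b, ma, mb) < 0.32
```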

    Optical joint correlator for real-time image tracking and retinal surgery

    A method for tracking an object in a sequence of images is described. Such a sequence may, for example, be a sequence of television frames. The object in the current frame is correlated with the object in the previous frame to obtain the relative location of the object in the two frames, and an optical joint transform correlator apparatus is provided to carry out the process. This joint transform correlator forms the basis of a laser eye surgery apparatus: an image of the fundus of the eyeball is stabilized and used by the correlator to track changes in eyeball position caused by involuntary movement. With knowledge of the eyeball position, a surgical laser can be pointed precisely at a position on the retina.
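
    In the apparatus the correlation is performed optically by forming the joint power spectrum of the two frames; a common digital counterpart of the same idea is FFT-based phase correlation, where the peak of the inverse-transformed, normalized cross-power spectrum gives the frame-to-frame displacement. A minimal sketch of the digital analogue (not the optical implementation itself):

```python
import numpy as np

def frame_shift(prev_frame, curr_frame):
    """Estimate the (dy, dx) translation of curr_frame relative to prev_frame
    using phase correlation, a digital analogue of joint transform correlation."""
    F1 = np.fft.fft2(prev_frame)
    F2 = np.fft.fft2(curr_frame)
    cross_power = F2 * np.conj(F1)
    # Normalizing by the magnitude sharpens the correlation peak.
    corr = np.fft.ifft2(cross_power / (np.abs(cross_power) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond the midpoint wrap around and correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```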

    A biologically inspired spiking model of visual processing for image feature detection

    To enable fast, reliable feature matching or tracking in scenes, features need to be discrete and meaningful; hence edge or corner features, commonly called interest points, are often used for this purpose. Experimental research has shown that biological vision systems use neuronal circuits to extract particular features such as edges or corners from visual scenes. Inspired by this biological behaviour, this paper proposes a biologically inspired spiking neural network for image feature extraction. Standard digital images are processed and converted to spikes in a manner similar to the processing that transforms light into spikes in the retina. Using a hierarchical spiking network, various types of biologically inspired receptive fields are used to extract progressively more complex image features. The performance of the network is assessed by examining the repeatability of the extracted features, with visual results presented for both synthetic and real images.
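
    The abstract does not detail the encoding stage; a common retina-like front end is a centre-surround (difference-of-Gaussians) receptive field whose response strength is converted to spike latency, so stronger responses fire earlier. A minimal sketch under that assumption (parameter values are illustrative and not from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def image_to_spike_latencies(image, sigma_center=1.0, sigma_surround=2.0,
                             threshold=0.05, t_max=100.0):
    """Centre-surround filtering followed by latency coding: pixels whose
    difference-of-Gaussians response exceeds the threshold emit one spike,
    with stronger responses mapped to earlier spike times."""
    img = image.astype(float) / 255.0
    response = np.abs(gaussian_filter(img, sigma_center)
                      - gaussian_filter(img, sigma_surround))
    latencies = np.full(img.shape, np.inf)          # inf = no spike emitted
    fires = response > threshold
    # Linearly map responses in (threshold, max] to latencies in [0, t_max).
    latencies[fires] = t_max * (1.0 - (response[fires] - threshold)
                                / (response.max() - threshold + 1e-12))
    return latencies
```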

    Live Demonstration: On the distance estimation of moving targets with a Stereo-Vision AER system

    Distance calculation is one of the most important goals in a digital stereoscopic vision system. In an AER system this goal is equally important, but distance cannot be calculated as accurately as we would like. This demonstration shows a first approximation in this field, using a disparity algorithm between the two retinas. The system can produce a distance estimate for a moving object; more specifically, a qualitative estimation. Taking into account the features of the stereo vision system, the prior positioning of the retinas, and the essential Hold&Fire building block, we are able to correlate the spike rate of the disparity with the distance. Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
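
    The geometric reason disparity can serve as a distance cue is the standard pinhole-stereo relation Z = f·B/d: closer objects produce larger disparity. The demonstration itself reports only a qualitative estimate, so the calibrated quantities in this sketch (focal length in pixels, baseline in metres) are illustrative assumptions:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard pinhole-stereo relation Z = f * B / d (larger disparity means a
    closer object). Values are illustrative; the AER demonstration only makes a
    qualitative distance estimate from the disparity spike rate."""
    if disparity_px <= 0:
        return float('inf')     # no measurable disparity: effectively far away
    return focal_px * baseline_m / disparity_px

# Example: with a 6 cm baseline and a 128 px focal length,
# a 10 px disparity corresponds to roughly 0.77 m.
# depth_from_disparity(10, 128, 0.06) -> 0.768
```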

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
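
    As a concrete picture of the data these sensors produce, each event is a tuple (timestamp, x, y, polarity); one of the simplest ways to hand events to a frame-based algorithm is to accumulate signed polarities over a short time window. A minimal sketch (the representation choice is illustrative, not a recommendation from the survey):

```python
import numpy as np

def accumulate_events(events, height, width, t_start, t_end):
    """Sum signed event polarities per pixel over the window [t_start, t_end).
    `events` is an iterable of (timestamp, x, y, polarity) with polarity +1/-1;
    the result is a frame-like 2D array usable by conventional vision code."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        if t_start <= t < t_end:
            frame[y, x] += p
    return frame
```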

    Visual identification by signature tracking

    We propose a new camera-based biometric: visual signature identification. We discuss the importance of the parameterization of the signatures for achieving good classification results, independently of variations in the position of the camera with respect to the writing surface. We show that affine arc-length parameterization performs better than conventional time and Euclidean arc-length parameterizations. We find that the system's verification performance is better than 4 percent error on skilled forgeries and 1 percent error on random forgeries, and that its recognition error rate is better than 1 percent, comparable to the best camera-based biometrics.
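
    Affine arc length is defined by ds = |x'·y'' − y'·x''|^(1/3) dt; resampling a signature trajectory uniformly in this parameter makes the representation insensitive to affine distortions of the writing plane, such as a change of camera viewpoint, which time or Euclidean arc-length parameterizations are not. A minimal sketch of the computation (the paper's exact discretization is not given):

```python
import numpy as np

def affine_arc_length(x, y):
    """Cumulative affine arc length s(t) of a planar curve (x[t], y[t]),
    using ds = |x' y'' - y' x''| ** (1/3) dt with finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    ds = np.abs(dx * ddy - dy * ddx) ** (1.0 / 3.0)
    # Trapezoidal accumulation, starting from s(0) = 0.
    return np.concatenate(([0.0], np.cumsum(0.5 * (ds[1:] + ds[:-1]))))

# To re-parameterize, sample s uniformly and interpolate the coordinates,
# e.g. np.interp(np.linspace(0, s[-1], n), s, x) and likewise for y.
```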