768 research outputs found

    FPGA-based enhanced probabilistic convergent weightless network for human iris recognition

    This paper investigates how human identification and identity verification can be performed by applying an FPGA-based weightless neural network, the Enhanced Probabilistic Convergent Neural Network (EPCN), to the iris biometric modality. The human iris is processed into feature vectors that are used to form connectivity during learning and subsequent recognition. Pre-processing of the iris prior to EPCN training is minimal. Structural modifications were also made to the Random Access Memory (RAM) based neural network, which enhance its robustness when applied in real time.
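
    A minimal sketch of the RAM-based (weightless) principle behind networks of this family is given below. It is not the EPCN itself: the EPCN's probabilistic convergent layers and its FPGA mapping are not described in the abstract, so the tuple size, feature length and training data here are illustrative assumptions only.

        # Sketch of a WiSARD-style RAM-based (weightless) discriminator. The EPCN's
        # specific connectivity and probabilistic merge stage are NOT reproduced here;
        # tuple size, feature length and the random seed are illustrative assumptions.
        import random

        class RamDiscriminator:
            """One discriminator: a bank of RAM nodes addressed by fixed bit tuples."""
            def __init__(self, n_bits, tuple_size, seed=0):
                rng = random.Random(seed)
                self.mapping = list(range(n_bits))
                rng.shuffle(self.mapping)                      # fixed random input connectivity
                self.tuple_size = tuple_size
                self.rams = [dict() for _ in range(n_bits // tuple_size)]

            def _addresses(self, bits):
                for i, ram in enumerate(self.rams):
                    idx = self.mapping[i * self.tuple_size:(i + 1) * self.tuple_size]
                    yield ram, tuple(bits[j] for j in idx)

            def train(self, bits):
                for ram, addr in self._addresses(bits):
                    ram[addr] = ram.get(addr, 0) + 1           # write the addressed RAM location

            def score(self, bits):
                return sum(1 for ram, addr in self._addresses(bits) if addr in ram)

        # Usage: one discriminator per enrolled identity; classify by the highest score.
        feature_bits = [random.randint(0, 1) for _ in range(256)]  # stand-in iris feature vector
        d = RamDiscriminator(n_bits=256, tuple_size=4)
        d.train(feature_bits)
        print(d.score(feature_bits))                               # 64: every RAM node responds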

    FPGA IMPLEMENTATION OF RED ALGORITHM FOR HIGH SPEED PUPIL ISOLATION

    Iris recognition is an automated method of biometric identification that applies mathematical pattern-recognition techniques to video images of an individual's irises, whose complex random patterns are unique and can be seen from some distance. Modern iris recognition algorithms can be computationally intensive, yet they are designed for traditional sequential processing elements such as a personal computer. A parallel processing alternative using a Field Programmable Gate Array (FPGA) offers an opportunity to speed up iris recognition. Within the scope of this project, iris template generation with directional filtering, a computationally expensive yet parallelizable portion of a modern iris recognition algorithm, is parallelized on an FPGA system. An algorithm that is both accurate and fast, in a hardware design that is small and transportable, is crucial to the implementation of this tool. As part of an ongoing effort to meet these criteria, this work improves one stage of the iris recognition algorithm, namely pupil isolation, and achieves a significant speed-up by implementing that portion of the algorithm on an FPGA.
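
    The RED algorithm itself is not detailed in the abstract, so the sketch below only illustrates the kind of per-pixel, easily parallelized work that pupil isolation involves and that maps naturally onto an FPGA: thresholding the dark pupil region and estimating its centre and radius. The threshold value and the synthetic test image are assumptions.

        # Hedged sketch of a basic pupil-isolation step (threshold + centroid + radius
        # estimate). The paper's RED algorithm and its FPGA pipeline are not reproduced;
        # the threshold value below is an assumption.
        import numpy as np

        def isolate_pupil(gray, threshold=40):
            """Return (cx, cy, r) for the dark pupil region of an 8-bit grayscale image."""
            mask = gray < threshold                      # pupil pixels are the darkest
            ys, xs = np.nonzero(mask)
            if xs.size == 0:
                return None
            cx, cy = xs.mean(), ys.mean()                # centroid of the dark blob
            r = np.sqrt(xs.size / np.pi)                 # radius, assuming a roughly circular blob
            return cx, cy, r

        # Usage with a synthetic 640x480 eye image containing a dark disc:
        img = np.full((480, 640), 200, dtype=np.uint8)
        yy, xx = np.ogrid[:480, :640]
        img[(xx - 320) ** 2 + (yy - 240) ** 2 < 50 ** 2] = 10
        print(isolate_pupil(img))                        # approximately (320.0, 240.0, 50.0)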

    STR: a student developed star tracker for the ESA-LED ESMO moon mission

    As part of their engineering degree, ISAE's students are developing a Star Tracker intended to be the core attitude estimation equipment of the European Student Moon Orbiter (ESMO). The development has been ongoing for several years and is currently in phase B; we intend to have an integrated breadboard built by the end of the academic year. The STR is composed of several sub-systems: the optical and detection sub-system, the electronics, the mechanics and the software. The optical detection part is based on an in-house developed new generation of APS detectors. The optical train is made of several lenses enclosed in a titanium tube. The electronics include an FPGA for pre-processing of the image and a microcontroller to manage the high-level functions of the instrument. The mechanical part includes the electronics box as well as the sensor baffle, and the design is optimized to minimize the thermo-elastic noise of the assembly. Embedded on the ESMO platform, this Star Tracker will be able to compute the satellite's attitude, taking into account the specific requirements of a Moon mission (illumination, radiation requirements and baffle adaptation to the lunar orbit). To validate the design, a software end-to-end simulation will include a complete simulation of the STR in its lunar dynamic environment; to that end, we are developing a simple orbital model for the mission, including potential dazzling by celestial bodies.
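
    The abstract does not state which attitude-determination algorithm the STR uses, so the following is only a sketch of the classic TRIAD method, shown as an example of how two identified star directions (measured in the sensor frame and known in the inertial catalogue) can be turned into an attitude matrix. The test rotation is synthetic.

        # Sketch of TRIAD attitude determination from two vector observations. This is
        # a generic textbook method, not necessarily the STR's algorithm.
        import numpy as np

        def triad(b1, b2, r1, r2):
            """Attitude matrix A such that b ~ A @ r, from two vector observations."""
            def frame(v1, v2):
                t1 = v1 / np.linalg.norm(v1)
                t2 = np.cross(v1, v2); t2 /= np.linalg.norm(t2)
                t3 = np.cross(t1, t2)
                return np.column_stack((t1, t2, t3))
            return frame(b1, b2) @ frame(r1, r2).T

        # Usage: two catalogue directions rotated by a known 30 deg yaw, recovered by TRIAD.
        r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
        c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
        A_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        A_est = triad(A_true @ r1, A_true @ r2, r1, r2)
        print(np.allclose(A_est, A_true))                # True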

    Gaze Controlled Human-Computer Interface

    The goal of the Gaze Controlled Human-Computer Interface project is to design and construct a non-invasive gaze-tracking system that determines where a user is looking on a computer screen in real time. To accomplish this, a fixed illumination source consisting of infrared (IR) light-emitting diodes (LEDs) is used to produce corneal reflections on the user's eyes. These reflections are captured with a video camera and compared to the relative location of the user's pupils. From this comparison, a correlation matrix can be created and the approximate location on the screen that the user is looking at can be determined. The final objective is to allow the user to manipulate a cursor on the computer screen simply by looking at different boxes in a grid on the monitor. The project includes the design of the hardware setup to provide a suitable environment for glint detection, image processing of the user's eyes to determine pupil location, the implementation of a probabilistic algorithm to determine an appropriate matrix transformation, and performance analysis on various users.
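
    The project's probabilistic algorithm is not specified beyond "an appropriate matrix transformation", so the sketch below shows one common formulation: a least-squares affine map from pupil-minus-glint vectors to screen coordinates, fitted on calibration points. The calibration data is synthetic and the affine model is an assumption.

        # Hedged sketch of gaze mapping: fit an affine transform from pupil-glint
        # offsets to screen coordinates using calibration targets (least squares).
        import numpy as np

        def fit_gaze_map(eye_vecs, screen_pts):
            """Least-squares affine map: [dx, dy, 1] @ M ~ [sx, sy]."""
            X = np.hstack([eye_vecs, np.ones((len(eye_vecs), 1))])
            M, *_ = np.linalg.lstsq(X, screen_pts, rcond=None)
            return M

        def predict(M, eye_vec):
            return np.append(eye_vec, 1.0) @ M

        # Usage: synthetic calibration where the screen position is an affine function
        # of the pupil-glint vector, plus a little measurement noise.
        rng = np.random.default_rng(0)
        eye = rng.uniform(-1, 1, size=(9, 2))                    # offsets at 9 calibration targets
        true_M = np.array([[800.0, 0.0], [0.0, 600.0], [960.0, 540.0]])
        screen = np.hstack([eye, np.ones((9, 1))]) @ true_M + rng.normal(0, 1, (9, 2))
        M = fit_gaze_map(eye, screen)
        print(predict(M, np.array([0.25, -0.1])))                # approximately (1160, 480)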

    Hardware-software co-design of an iris recognition algorithm

    This paper describes the implementation of an iris recognition algorithm based on hardware-software co-design. The system architecture consists of a general-purpose 32-bit microprocessor and several slave coprocessors that accelerate the most intensive calculations. The whole iris recognition algorithm has been implemented on a low-cost Spartan-3 FPGA, achieving a significant reduction in execution time compared to a conventional software-only application. Experimental results show that, with a clock speed of 40 MHz, an IrisCode is obtained in less than 523 ms from a 640x480-pixel image, which is just 20% of the total time needed by a software solution running on the same microprocessor embedded in the architecture.
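
    From the reported numbers, the pure-software baseline takes roughly 523 ms / 0.20, i.e. about 2.6 s, so the co-designed system achieves roughly a 5x overall speedup. The abstract does not name the accelerated kernels, so as an illustration of the kind of bit-level operation that co-design typically offloads to a coprocessor, the sketch below computes a masked Hamming distance between two IrisCodes (Daugman-style matching); the code length and mask handling are assumptions.

        # Hedged sketch of IrisCode matching via masked Hamming distance. This is a
        # standard iris-matching kernel, not necessarily one of the paper's coprocessors.
        import numpy as np

        def hamming_distance(code_a, code_b, mask_a, mask_b):
            """Fraction of differing bits over the jointly valid (unmasked) bits."""
            valid = mask_a & mask_b
            if not valid.any():
                return 1.0
            return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)

        # Usage: 2048-bit codes; a genuine pair differs only in a few noisy bits.
        rng = np.random.default_rng(1)
        code = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
        noisy = code.copy()
        noisy[rng.choice(2048, 100, replace=False)] ^= True       # ~5% bit noise
        mask = np.ones(2048, dtype=bool)
        print(hamming_distance(code, noisy, mask, mask))           # ~0.05 (genuine match)
        print(hamming_distance(code, ~code, mask, mask))           # 1.0 (worst case)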

    FPGA implementation of eye tracker video processing

    Current wearable video-based eye tracking technology has considerably strict limitations in size and processing speed. Hardware used to process video must be lightweight, compact and somewhat modular. The video should be processed as it is being collected, in real time and accurately enough to generate useful, reliable data. The use of general-purpose logic processors makes the implementation of basic pupil detection and eye tracking algorithms simple and compact, but it introduces accuracy issues due to the necessary simplicity of the computational methods used to detect and track the pupil. Large, complicated hardware implementations are more accurate, but they are quickly outdated and unwieldy. Application-specific programmable logic devices solve these issues in part, since they allow users to synthesize a fast hardware device that realizes complex and robust algorithms. The devices can also be updated or redesigned quickly and easily, as often as needed. When applied to eye tracking, this could improve on the accuracy issues and maintain functionality within a small, self-contained physical device. A basic video processing device implementation was attempted within severe functional constraints for this work.
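
    The work's actual hardware design is not reproduced here; as an illustration of the streaming style an FPGA pipeline favours, the sketch below computes a pupil centroid in a single raster-order pass with only running accumulators, so no frame buffer is required. The threshold and the toy frame are assumptions.

        # Hedged sketch of streaming pupil detection: one pass over pixels in raster
        # order, keeping only running sums, as a frame-buffer-free FPGA pipeline would.
        def streaming_pupil_centroid(pixel_stream, width, threshold=40):
            """pixel_stream yields 8-bit intensities in raster order; returns (cx, cy)."""
            count = sum_x = sum_y = 0
            for i, p in enumerate(pixel_stream):
                if p < threshold:                        # dark pixel -> candidate pupil pixel
                    count += 1
                    sum_x += i % width                   # column accumulator
                    sum_y += i // width                  # row accumulator
            if count == 0:
                return None
            return sum_x / count, sum_y / count          # one division per frame, at frame end

        # Usage: an 8x6 toy frame with a dark 2x2 block at columns 3-4, rows 2-3.
        frame = [[200] * 8 for _ in range(6)]
        for y in (2, 3):
            for x in (3, 4):
                frame[y][x] = 10
        pixels = (p for row in frame for p in row)
        print(streaming_pupil_centroid(pixels, width=8))  # (3.5, 2.5)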

    An FPGA-based hardware accelerator for iris segmentation

    Biometric authentication is becoming an increasingly prevalent way to identify a person based on unique physical traits such as the fingerprint, the face and the iris. The iris stands out among these traits due to its relative invariability over time and its high uniqueness. However, iris recognition without special, dedicated tools such as near-infrared (NIR) cameras and stationary high-performance computers is a challenge. Solutions have been proposed that target mobile platforms such as smartphones and tablets by making use of the RGB camera commonly found on those platforms, but they tend to be slower than dedicated systems because of the lower performance of mobile processors. This work details an approach to the mobility and performance problems of iris segmentation in current solutions by targeting an FPGA-based SoC. The SoC allows the iris recognition system to run in software while accelerating the slower parts of the system with parallel, dedicated hardware modules. The results show a segmentation speedup of 2X compared to an x86-64 platform and 46X compared to an ARMv7 platform.
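
    The 2X and 46X figures apply to the segmentation stage alone. A quick Amdahl's-law estimate shows how such a stage speedup translates into end-to-end gain; the 60% share of total recognition time assumed for segmentation below is an illustrative assumption, not a number from the paper.

        # Amdahl's-law estimate of whole-pipeline speedup when only the segmentation
        # stage is accelerated. The 0.60 stage fraction is an assumption.
        def overall_speedup(stage_fraction, stage_speedup):
            """Speedup of the whole pipeline when one stage is accelerated."""
            return 1.0 / ((1.0 - stage_fraction) + stage_fraction / stage_speedup)

        for platform, s in (("x86-64", 2.0), ("ARMv7", 46.0)):
            print(platform, round(overall_speedup(0.60, s), 2))
        # x86-64 1.43
        # ARMv7 2.42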

    An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes

    In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices offer an architecture capable of handling the sensor output stream in a massively parallel fashion, and they are efficient enough, in terms of processing time, to address several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs). The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier transforms (FFTs); we have therefore carried out a comparison between our FPGA 2D-FFT and other implementations.
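
    The CAFADIS pipeline itself is not described in enough detail in the abstract to reproduce, but the sketch below shows the generic FFT-based least-squares wavefront reconstruction from x/y slope measurements that such FFT-centred implementations build on. The grid size, slope data and periodic boundary assumption are illustrative.

        # Hedged sketch of Fourier-domain wavefront phase recovery from slopes.
        import numpy as np

        def fft_phase_recovery(sx, sy, dx=1.0):
            """Least-squares wavefront phase from x/y slopes via 2D FFTs (periodic grid)."""
            n, m = sx.shape
            kx = 2 * np.pi * np.fft.fftfreq(m, d=dx)
            ky = 2 * np.pi * np.fft.fftfreq(n, d=dx)
            KX, KY = np.meshgrid(kx, ky)
            k2 = KX ** 2 + KY ** 2
            k2[0, 0] = 1.0                               # avoid division by zero at DC
            phi_hat = -1j * (KX * np.fft.fft2(sx) + KY * np.fft.fft2(sy)) / k2
            phi_hat[0, 0] = 0.0                          # piston is unobservable from slopes
            return np.real(np.fft.ifft2(phi_hat))

        # Usage: a smooth periodic test phase and its analytic slopes.
        N = 64
        X, Y = np.meshgrid(np.arange(N), np.arange(N))
        phase = np.sin(2 * np.pi * X / N) * np.cos(2 * np.pi * Y / N)
        sx = (2 * np.pi / N) * np.cos(2 * np.pi * X / N) * np.cos(2 * np.pi * Y / N)
        sy = -(2 * np.pi / N) * np.sin(2 * np.pi * X / N) * np.sin(2 * np.pi * Y / N)
        rec = fft_phase_recovery(sx, sy)
        print(np.max(np.abs((rec - rec.mean()) - (phase - phase.mean()))))   # ~1e-15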

    Operations of and Future Plans for the Pierre Auger Observatory

    Technical reports on operations and features of the Pierre Auger Observatory, including ongoing and planned enhancements and the status of the future northern hemisphere portion of the Observatory. Contributions to the 31st International Cosmic Ray Conference, Lodz, Poland, July 2009.