
    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, and signature verification, together with miscellaneous systems covering biometric management policies, reliability measures, pressure-based typing and signature verification, biochemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems, and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    QUAD FLAT NO-LEAD (QFN) DEVICE FAULTY DETECTION USING GABOR WAVELETS

    Computer vision inspection systems using image processing algorithms have been adopted by many manufacturing companies as a method of quality control. Since manufacturing industries produce many types of products, various image processing algorithms have been developed to suit different types of output products. In this paper, we explore Gabor wavelet feature extraction as a method for vision inspection. Unlike conventional vision inspection systems, which require manual human configuration of inspection algorithms, our experiment uses Gabor wavelets to decompose the image into distinctive scales and orientations. Through chi-square distance computation, the physical quality of a Quad Flat No-Lead (QFN) device can be judged by computing the dissimilarity of the test image with the trained database, thus eliminating the weakness of human error in configuring vision systems. We tested our algorithm using 64 real-world production images obtained from a 0.3-megapixel monochromatic industrial smart vision camera. The images consist of a mixture of physically good and defective QFN units. The proposed algorithm achieved a 98.46% accuracy rate with an average processing time of 0.457 seconds per image.
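
    As an illustration of the pipeline described above, the following is a minimal sketch, not the authors' code, of Gabor-wavelet feature extraction followed by a chi-square dissimilarity test. It assumes OpenCV and NumPy are available; the filter-bank parameters, the 0.05 threshold, and the file names are illustrative assumptions.

    import cv2
    import numpy as np

    def gabor_features(gray, ksizes=(7, 11, 15, 21), orientations=8):
        """Mean absolute response of a bank of Gabor filters over several
        scales and orientations, normalised like a histogram."""
        feats = []
        for ksize in ksizes:
            for k in range(orientations):
                theta = k * np.pi / orientations
                kernel = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                            theta=theta, lambd=ksize / 2.0,
                                            gamma=0.5, psi=0.0)
                response = cv2.filter2D(gray, cv2.CV_32F, kernel)
                feats.append(np.abs(response).mean())
        feats = np.asarray(feats, dtype=np.float64)
        return feats / (feats.sum() + 1e-12)

    def chi_square_distance(f1, f2, eps=1e-12):
        """Chi-square dissimilarity between two normalised feature vectors."""
        return 0.5 * np.sum((f1 - f2) ** 2 / (f1 + f2 + eps))

    # Hypothetical usage: a unit is flagged as defective when its distance from a
    # reference vector trained on known-good units exceeds a tuned threshold.
    good_reference = gabor_features(cv2.imread("good_unit.png", cv2.IMREAD_GRAYSCALE))
    test_features = gabor_features(cv2.imread("test_unit.png", cv2.IMREAD_GRAYSCALE))
    is_defective = chi_square_distance(test_features, good_reference) > 0.05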

    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. Because the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is increasingly used to solve more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in DWT algorithms and applications. The book covers a wide range of methods (e.g., lifting, shift invariance, multi-scale analysis) for constructing DWTs. The chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low-bit-rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters consist of both tutorial and highly advanced material; therefore, the book is intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
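
    As a small, hedged illustration of the lifting construction mentioned above (not taken from any specific chapter), the sketch below computes one level of the Haar DWT and its inverse with NumPy; the even/odd split and the unnormalised filter choice are assumptions made for brevity.

    import numpy as np

    def haar_lifting_forward(x):
        """One decomposition level: predict the detail, then update the approximation."""
        x = np.asarray(x, dtype=float)
        even, odd = x[0::2], x[1::2]
        detail = odd - even              # predict step
        approx = even + 0.5 * detail     # update step
        return approx, detail

    def haar_lifting_inverse(approx, detail):
        """Undo the update and predict steps, then re-interleave the samples."""
        even = approx - 0.5 * detail
        odd = detail + even
        x = np.empty(even.size + odd.size)
        x[0::2], x[1::2] = even, odd
        return x

    # Perfect-reconstruction check on an even-length signal.
    signal = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 1.0, 3.0, 7.0])
    approx, detail = haar_lifting_forward(signal)
    assert np.allclose(haar_lifting_inverse(approx, detail), signal)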

    Sense–Analyze–Respond–Actuate (SARA) Paradigm: Proof of Concept System Spanning Nanoscale and Macroscale Actuation for Detection of Escherichia coli in Aqueous Media

    Foodborne pathogens are a major concern for public health. We demonstrate for the first time a partially automated sensing system for rapid (~17 min), label-free impedimetric detection of Escherichia coli spp. in food samples (vegetable broth) and hydroponic media (aeroponic lettuce system) based on temperature-responsive poly(N-isopropylacrylamide) (PNIPAAm) nanobrushes. This proof of concept (PoC) for the Sense-Analyze-Respond-Actuate (SARA) paradigm uses a biomimetic nanostructure that is analyzed and actuated with a smartphone. The soft-material sensing mechanism is inspired by binary symbiotic systems found in nature, where low concentrations of bacteria are captured from complex matrices by brush actuation driven by concentration gradients at the tissue surface. To mimic this natural actuation system, carbon-metal nanohybrid sensors were fabricated as the transducer layer and coated with PNIPAAm nanobrushes. The most effective coating and actuation protocol for E. coli detection at temperatures above and below the critical solution temperature of PNIPAAm was determined using a series of electrochemical experiments. After analyzing nanobrush actuation in stagnant media, we developed a flow-through system using a series of pumps that are triggered by electrochemical events at the surface of the biosensor. The SARA PoC may be viewed as a cyber-physical system that actuates nanomaterials using smartphone-based electroanalytical testing of samples. This study demonstrates thermal actuation of polymer nanobrushes to detect (sense) bacteria using a cyber-physical systems (CPS) approach, and may catalyze the development of smart sensors capable of actuation at the nanoscale (stimulus-responsive polymer) and the macroscale (non-microfluidic pumping).
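
    A minimal sketch of the Sense-Analyze-Respond-Actuate loop described above follows; the instrument interfaces (read_impedance, set_pump), the detection threshold, and the timing constants are hypothetical placeholders, not the authors' smartphone or potentiostat API.

    import time

    IMPEDANCE_DROP_THRESHOLD = 0.15   # assumed relative change indicating bacterial capture

    def read_impedance():
        """Placeholder for an impedimetric reading from the nanobrush biosensor."""
        raise NotImplementedError("replace with the actual potentiostat call")

    def set_pump(running):
        """Placeholder for switching the flow-through pump on or off."""
        raise NotImplementedError("replace with the actual actuator call")

    def sara_loop(baseline, period_s=30, duration_s=17 * 60):
        """Sense impedance, analyze the relative change against a baseline,
        respond with a detection decision, and actuate the pump accordingly."""
        start = time.time()
        while time.time() - start < duration_s:
            z = read_impedance()                              # sense
            rel_change = (baseline - z) / baseline            # analyze
            detected = rel_change > IMPEDANCE_DROP_THRESHOLD  # respond
            set_pump(not detected)                            # actuate: stop flow once captured
            if detected:
                return True
            time.sleep(period_s)
        return False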

    Computational Intelligence and Complexity Measures for Chaotic Information Processing

    This dissertation investigates the application of computational intelligence methods to the analysis of nonlinear chaotic systems in the framework of many known and newly designed complex systems. Parallel comparisons are made between these methods. This provides insight into the difficult challenges facing nonlinear systems characterization and aids in developing a generalized algorithm for computing algorithmic complexity measures, Lyapunov exponents, information dimension, and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems and make it possible to distinguish order from disorder in them. Steps required for computing Lyapunov exponents with a reorthonormalization method and a group theory approach are formalized. Procedures for implementing the computational algorithms are designed, and numerical results for each system are presented. The advance-time sampling technique is designed to overcome the scarcity of phase space samples and the buffer overflow problem in algorithmic complexity measure estimation for slow-dynamics feedback-controlled systems. It is proved analytically and tested numerically that for a quasiperiodic system such as a Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier: this quantity turns out to be one for random sequences and a non-zero value less than one for chaotic sequences. For periodic and quasi-periodic responses, as data strings grow, their normalized complexity approaches zero, with a faster decreasing rate observed for periodic responses. Algorithmic complexity analysis is performed on a class of rate-1/n convolutional encoders, and the degree of diffusion in random-like patterns is measured. Simulation evidence indicates that the algorithmic complexity associated with a particular class of rate-1/n codes increases with the encoder constraint length, in parallel with the increase in the error-correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence manifests smaller algorithmic complexity with a larger free distance value.
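
    To make the normalized algorithmic complexity measure concrete, the following is a minimal sketch, not the dissertation's code, of a Lempel-Ziv (LZ76) complexity estimator with the usual n / log2(n) normalization; the median-threshold binarization is an assumed preprocessing choice.

    import numpy as np

    def lempel_ziv_complexity(bits):
        """Number of distinct phrases produced by LZ76 parsing of a binary string."""
        s = "".join("1" if b else "0" for b in bits)
        i, phrases, n = 0, 0, len(s)
        while i < n:
            length = 1
            # Extend the phrase while it can still be copied from the preceding text.
            while i + length <= n and s[i:i + length] in s[:i + length - 1]:
                length += 1
            phrases += 1
            i += length
        return phrases

    def normalized_complexity(signal):
        """Normalised so random sequences approach one and periodic ones approach zero."""
        x = np.asarray(signal, dtype=float)
        bits = x > np.median(x)            # assumed binarization rule
        n = bits.size
        return lempel_ziv_complexity(bits) * np.log2(n) / n

    # Illustration: a random sequence scores near 1, a periodic one near 0.
    rng = np.random.default_rng(0)
    print(normalized_complexity(rng.random(4096)))           # close to 1
    print(normalized_complexity(np.tile([0.0, 1.0], 2048)))  # close to 0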

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medical research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics to decision support for healthcare professionals through big data analytics, and to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, higher quality, and lower cost. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that span several interesting topics in smart healthcare, including public health, health information technology (health IT), and smart medicine.

    Entropy in Image Analysis III

    Image analysis can be applied to rich and assorted scenarios; therefore, the aim of this recent research field is not only to mimic the human vision system. Image analysis is among the main methods that computers use today, and there is a body of knowledge that, thanks to artificial intelligence, they will be able to manage in a totally unsupervised manner in the future. The articles published in this book clearly point toward such a future.

    A Decade of Neural Networks: Practical Applications and Prospects

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to deal intelligently and adaptively with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight the benefits of neural networks in real-world applications compared with conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.