2,480 research outputs found

    Trellis-Based Equalization for Sparse ISI Channels Revisited

    Full text link
    Sparse intersymbol-interference (ISI) channels are encountered in a variety of high-data-rate communication systems. Such channels have a large channel memory length, but only a small number of significant channel coefficients. In this paper, trellis-based equalization of sparse ISI channels is revisited. Due to the large channel memory length, the complexity of maximum-likelihood detection, e.g., by means of the Viterbi algorithm (VA), is normally prohibitive. In the first part of the paper, a unified framework based on factor graphs is presented for complexity reduction without loss of optimality. In this new context, two known reduced-complexity algorithms for sparse ISI channels are recapitulated: the multi-trellis VA (M-VA) and the parallel-trellis VA (P-VA). It is shown that the M-VA, contrary to earlier claims, does not lead to a reduced computational complexity. The P-VA, on the other hand, leads to a significant complexity reduction, but can only be applied to a certain class of sparse channels. In the second part of the paper, a unified approach is investigated to tackle general sparse channels: it is shown that the use of a linear filter at the receiver renders the application of standard reduced-state trellis-based equalizer algorithms feasible, without significant loss of optimality. Numerical results verify the efficiency of the proposed receiver structure. Comment: To be presented at the 2005 IEEE Int. Symp. Inform. Theory (ISIT 2005), September 4-9, 2005, Adelaide, Australia.
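
    As a rough illustration of why maximum-likelihood detection is prohibitive for such channels, the following hedged Python sketch (not from the paper) implements a textbook Viterbi equalizer for BPSK over an ISI channel; the trellis needs 2^L states, where L is the channel memory length, even when only a couple of taps are significant.

```python
import itertools
import numpy as np

def viterbi_equalize(y, h):
    """ML detection of a BPSK sequence from received samples y over channel h."""
    L = len(h) - 1                                        # channel memory length
    states = list(itertools.product([-1, 1], repeat=L))   # 2**L trellis states
    cost = {s: 0.0 for s in states}                       # path metrics
    paths = {s: [] for s in states}                       # survivor paths
    for yk in y:
        new_cost, new_paths = {}, {}
        for s in states:                                  # s = (a[k-1], ..., a[k-L])
            for a in (-1, 1):                             # hypothesised current symbol
                z = h[0] * a + sum(h[i + 1] * s[i] for i in range(L))
                metric = cost[s] + (yk - z) ** 2
                ns = (a,) + s[:-1]                        # next trellis state
                if ns not in new_cost or metric < new_cost[ns]:
                    new_cost[ns], new_paths[ns] = metric, paths[s] + [a]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]

# A sparse channel: memory length 4, but only two significant coefficients.
h = np.array([1.0, 0.0, 0.0, 0.0, 0.6])
rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=20)
y = np.convolve(a, h)[:len(a)] + 0.1 * rng.standard_normal(len(a))
print(viterbi_equalize(y, h))                             # still needs 2**4 = 16 states
```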

    Illumination and Expression Invariant Face Recognition: Toward Sample Quality-based Adaptive Fusion

    Get PDF
    The performance of face recognition schemes is adversely affected by moderate to significant variations in illumination, pose, and facial expression. Most existing approaches to face recognition deal with one of these problems by controlling the other conditions. Besides strong efficiency requirements, face recognition systems on constrained mobile devices and PDAs are expected to be robust against all variations in recording conditions that arise naturally from the way such devices are used. Wavelet-based face recognition schemes have been shown to meet the efficiency requirements well. Wavelet transforms decompose face images into different frequency subbands at different scales, each giving rise to a different representation of the face, thereby providing the ingredients for a multi-stream approach to face recognition that stands a real chance of achieving an acceptable level of robustness. This paper is concerned with the best fusion strategy for a multi-stream face recognition scheme. By investigating the robustness of different wavelet subbands against variations in lighting conditions and expressions, we shall demonstrate the shortcomings of current non-adaptive fusion strategies and argue for the need to develop an image quality-based, intelligent, dynamic fusion strategy.
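
    The following hedged Python sketch (assumed feature extraction and fixed weights, not the paper's implementation) illustrates the kind of non-adaptive multi-stream fusion the abstract criticises: each wavelet subband yields a feature stream, per-stream distances are combined with fixed weights, and an adaptive strategy would instead derive those weights from image quality.

```python
import numpy as np
import pywt  # PyWavelets

def subband_features(img, wavelet="haar", level=2):
    """Return one feature vector per wavelet subband (approximation + details)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = [coeffs[0].ravel()]                      # approximation subband
    for (cH, cV, cD) in coeffs[1:]:                  # detail subbands per level
        feats.extend([cH.ravel(), cV.ravel(), cD.ravel()])
    return feats

def fused_distance(probe, gallery, weights):
    """Weighted sum of per-subband Euclidean distances (non-adaptive fusion)."""
    d = [np.linalg.norm(p - g) for p, g in zip(subband_features(probe),
                                               subband_features(gallery))]
    return float(np.dot(weights, d))

# Example with random stand-in images; real use would normalise each stream's scores.
rng = np.random.default_rng(1)
probe, gallery = rng.random((64, 64)), rng.random((64, 64))
weights = np.ones(7) / 7                             # 1 approximation + 6 detail subbands
print(fused_distance(probe, gallery, weights))
```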

    Image quality-based adaptive illumination normalisation for face recognition

    Get PDF
    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions between the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, in which every image is normalised irrespective of the lighting conditions under which it was acquired.
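
    A minimal sketch of the adaptive rule described above, assuming the luminance-distortion term of the universal image quality index as the quality measure and 8-bit grayscale inputs (the threshold value here is illustrative):

```python
import numpy as np
import cv2

def luminance_similarity(probe, reference):
    """2*mu_x*mu_y / (mu_x^2 + mu_y^2): 1.0 means identical mean luminance."""
    mx, my = float(probe.mean()), float(reference.mean())
    return 2.0 * mx * my / (mx * mx + my * my + 1e-12)

def adaptive_normalise(probe, reference, threshold=0.2):
    """Histogram-equalise the probe only if its luminance distortion is high."""
    distortion = 1.0 - luminance_similarity(probe, reference)
    return cv2.equalizeHist(probe) if distortion > threshold else probe

# Synthetic 8-bit examples; real use would load aligned grayscale face crops.
rng = np.random.default_rng(0)
reference = rng.integers(100, 200, size=(64, 64), dtype=np.uint8)  # well-lit reference
probe = rng.integers(0, 80, size=(64, 64), dtype=np.uint8)         # under-exposed probe
print(adaptive_normalise(probe, reference).mean() > probe.mean())  # True: equalised
```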

    Natural solution to antibiotic resistance: bacteriophages ‘The Living Drugs’

    Get PDF
    Antibiotics have been a panacea in animal husbandry as well as in human therapy for decades. The huge amount of antibiotics used to promote the growth and protect the health of farm animals has led to the evolution of bacteria that are resistant to the drugs' effects. Today, many researchers are working with bacteriophages (phages) as an alternative to antibiotics for the control of pathogens in human therapy as well as for prevention, biocontrol, and therapy in animal agriculture. Phage therapy and biocontrol have yet to fulfil their promise or potential, largely due to several key obstacles to their performance. Several suggestions are shared in order to point a direction for overcoming common obstacles in applied phage technology. The key to the successful use of phages in modern scientific, farm, food-processing, and clinical applications is to understand the common obstacles as well as best practices, and to develop answers that work in harmony with nature.

    Construction of dictionaries to reconstruct high-resolution images for face recognition

    Get PDF
    This paper presents an investigation into the construction of over-complete dictionaries for use in reconstructing a super-resolution image from a single low-resolution input image for face recognition at a distance. The ultimate aim is to exploit the recently developed Compressive Sensing (CS) theory to develop scalable face recognition schemes that do not require training. Here we shall demonstrate that dictionaries satisfying the Restricted Isometry Property (RIP) used in CS can achieve face recognition accuracy levels as good as those achieved by dictionaries learned from face image databases using elaborate procedures.
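
    As a hedged illustration of the underlying idea (not the paper's dictionary construction): a random Gaussian dictionary, which satisfies the RIP with high probability and needs no training, recovers a sparse coefficient vector from a low-dimensional measurement via Orthogonal Matching Pursuit. In the paper's setting the measurement would be a low-resolution face patch and the sparse code would synthesise its high-resolution counterpart.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_atoms, n_measurements, sparsity = 256, 64, 5

# Random Gaussian dictionary with unit-norm columns (RIP holds with high probability).
D = rng.standard_normal((n_measurements, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Synthesise a measurement that is sparse in D, then recover the coefficients.
true_coef = np.zeros(n_atoms)
true_coef[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
y = D @ true_coef

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
omp.fit(D, y)
print("support recovered:",
      set(np.flatnonzero(omp.coef_)) == set(np.flatnonzero(true_coef)))
```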

    Image-Quality-Based Adaptive Face Recognition

    Get PDF
    The accuracy of automated face recognition systems is greatly affected by intraclass variations between the enrollment and identification stages. In particular, changes in lighting conditions are a major contributor to these variations. Common approaches to addressing the effects of varying lighting conditions include preprocessing face images to normalize intraclass variations and the use of illumination-invariant face descriptors. Histogram equalization is a widely used technique in face recognition to normalize variations in illumination. However, normalizing well-lit face images can lead to a decrease in recognition accuracy. The multiresolution property of wavelet transforms is used in face recognition to extract facial feature descriptors at different scales and frequencies. The high-frequency wavelet subbands have been shown to provide illumination-invariant face descriptors. However, the approximation wavelet subbands have been shown to be a better feature representation for well-lit face images. Fusion of match scores from low- and high-frequency-based face representations has been shown to improve recognition accuracy under varying lighting conditions. However, the selection of fusion parameters for different lighting conditions remains unsolved. Motivated by these observations, this paper presents adaptive approaches to face recognition to overcome the adverse effects of varying lighting conditions. Image quality, measured in terms of luminance distortion in comparison to a known reference image, is used as the basis for adapting the application of global and region illumination normalization procedures. Image quality is also used to adaptively select fusion parameters for wavelet-based multistream face recognition.
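
    The following sketch shows one plausible (assumed, not the paper's) way of letting an image-quality score steer the fusion parameters: a high luminance-quality score shifts weight toward the approximation-subband stream, while a low score shifts it toward the more illumination-invariant detail-subband streams.

```python
import numpy as np

def adaptive_fusion_weights(quality, n_detail_streams=3):
    """quality in [0, 1], 1 = well lit; returns (approximation weight, detail weights)."""
    w_approx = quality                              # trust the approximation subband when well lit
    w_detail = (1.0 - quality) / n_detail_streams   # spread the rest over the detail streams
    return w_approx, np.full(n_detail_streams, w_detail)

def fused_score(approx_dist, detail_dists, quality):
    """Quality-weighted combination of per-stream match distances."""
    w_a, w_d = adaptive_fusion_weights(quality, len(detail_dists))
    return w_a * approx_dist + float(np.dot(w_d, detail_dists))

# A poorly lit probe (quality 0.3) down-weights the approximation stream.
print(fused_score(approx_dist=0.8, detail_dists=[0.4, 0.5, 0.45], quality=0.3))
```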

    Computer Aided Design of an Electrostatic FIB System

    Get PDF

    Privacy preserving, real-time and location secured biometrics for mCommerce authentication

    Get PDF
    Secure wireless connectivity between mobile devices and financial/commercial establishments is mature, and so is the security of remote authentication for mCommerce. However, current techniques remain open to hacking, false misrepresentation, replay, and other attacks because the authentication process lacks real-time, precise location information. This paper proposes a new technique that combines freshly generated, real-time personal biometric data of the client with the present position of the mobile device used to perform the mCommerce transaction, so as to form a real-time biometric representation for authenticating any remote transaction. A fresh GPS fix generates the time and location used to stamp the freshly captured biometric data, producing a single, real-time biometric representation on the mobile device. A trusted Certification Authority (CA) acts as an independent authenticator of the client's claimed real-time location and the fresh biometric data provided, which eliminates the need for the user to enrol with many mCommerce service and application providers. The CA can also, independently from the client and at that instant of time, obtain the mobile device's time and location from the cellular network operator and compare them with the received information, together with the client's stored biometric information. Finally, to preserve the client's location privacy and to eliminate the possibility of cross-application client tracking, this paper proposes shielding the real location of the mobile device prior to submission to the CA or authenticators.
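
    A minimal sketch of the time-and-location stamping step only, with assumed field names and a shared-key HMAC standing in for whatever signing mechanism the paper's CA protocol uses; location shielding and the network-side verification are not shown.

```python
import hashlib
import hmac
import json
import time

def stamp_biometric(biometric_bytes: bytes, lat: float, lon: float,
                    shared_key: bytes) -> dict:
    """Bind a fresh biometric capture to a time-and-location fix (hypothetical format)."""
    record = {
        "biometric_digest": hashlib.sha256(biometric_bytes).hexdigest(),
        "lat": lat,
        "lon": lon,
        "timestamp": int(time.time()),   # a real system would use the GPS fix time
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return record

# The CA would recompute the HMAC over the same fields to check integrity and freshness.
stamped = stamp_biometric(b"raw biometric sample", 51.48, -3.18, b"client-ca-shared-key")
print(stamped["tag"][:16], "...")
```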

    Enhancing face recognition at a distance using super resolution

    Get PDF
    Surveillance video is generally characterised by low-resolution and blurred images. Decreases in image resolution lead to the loss of high-frequency facial components, which is expected to adversely affect recognition rates. Super resolution (SR) is a technique used to generate a higher-resolution image from a given low-resolution, degraded image. Dictionary-based super resolution pre-processing techniques have been developed to overcome the problem of low-resolution images in face recognition. However, the super resolution reconstruction process is ill-posed and results in visual artifacts that can be distracting to humans and/or affect machine feature extraction and face recognition algorithms. In this paper, we investigate the impact on face recognition of two existing super-resolution methods that reconstruct a high-resolution image from single or multiple low-resolution images. We propose an alternative scheme based on dictionaries in high-frequency wavelet subbands. The performance of the proposed method is evaluated on databases of high- and low-resolution images captured under different illumination conditions and at different distances. We shall demonstrate that the proposed approach at level-3 DWT decomposition has superior performance in comparison to the other super resolution methods.
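
    A hedged sketch of the wavelet-domain pipeline using PyWavelets: the low-resolution face is decomposed to level 3, the detail subbands are processed (here by an identity stand-in where the paper would apply dictionary-based reconstruction of the high-frequency content), and the transform is inverted.

```python
import numpy as np
import pywt

def wavelet_domain_sr(lowres, wavelet="haar", level=3, enhance=lambda band: band):
    """Apply `enhance` to every detail subband at every level, then reconstruct."""
    coeffs = pywt.wavedec2(lowres, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    new_details = [tuple(enhance(band) for band in lvl) for lvl in details]
    return pywt.waverec2([approx] + new_details, wavelet)

# A real `enhance` would sparse-code each high-frequency band over its own dictionary.
rng = np.random.default_rng(2)
lowres = rng.random((32, 32))
print(wavelet_domain_sr(lowres).shape)
```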

    LBP based on multi wavelet sub-bands feature extraction used for face recognition

    Get PDF
    The strategy for extracting discriminant features from a face image is immensely important to accurate face recognition. This paper proposes a feature extraction algorithm based on wavelets and local binary patterns (LBPs). The proposed method decomposes a face image into multiple frequency sub-bands using the wavelet transform. Each sub-band in the wavelet domain is divided into non-overlapping sub-regions. LBP histograms based on the traditional 8-neighbour sampling points are then extracted from the approximation sub-band, whilst 4-neighbour sampling points are used to extract LBP histograms from the detail sub-bands. Finally, all LBP histograms are concatenated into a single feature histogram that effectively represents the face image. Euclidean distance is used to measure the similarity of feature histograms, and the final recognition is performed by the nearest-neighbour classifier. The above strategy was tested on two publicly available face databases (Yale and ORL) using different scenarios and different combinations of sub-bands. Results show that the proposed method outperforms traditional LBP-based features.
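
    A minimal sketch of the described pipeline (the region grid size and histogram binning are assumptions): one-level wavelet decomposition, 8-point uniform LBP on the approximation sub-band and 4-point LBP on the detail sub-bands, regional histograms concatenated into a single descriptor, and nearest-neighbour matching by Euclidean distance.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def regional_lbp_histogram(band, points, regions=4):
    """Uniform LBP histograms over a regions x regions grid, concatenated."""
    lbp = local_binary_pattern(band, P=points, R=1, method="uniform")
    n_bins = points + 2                        # uniform patterns + one non-uniform bin
    h, w = lbp.shape
    hists = []
    for i in range(regions):
        for j in range(regions):
            block = lbp[i * h // regions:(i + 1) * h // regions,
                        j * w // regions:(j + 1) * w // regions]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists)

def face_descriptor(img, wavelet="haar"):
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    parts = [regional_lbp_histogram(cA, points=8)]                        # 8-neighbour LBP
    parts += [regional_lbp_histogram(b, points=4) for b in (cH, cV, cD)]  # 4-neighbour LBP
    return np.concatenate(parts)

# Nearest-neighbour identification: smallest Euclidean distance to a gallery descriptor.
rng = np.random.default_rng(3)
gallery = {name: face_descriptor(rng.random((64, 64))) for name in ("A", "B")}
probe = face_descriptor(rng.random((64, 64)))
print(min(gallery, key=lambda n: np.linalg.norm(gallery[n] - probe)))
```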