On the Classification of Gasoline-fuelled Engine Exhaust Fume Related Faults Using Electronic Nose and Principal Component Analysis
The efficiency and effectiveness of any equipment or system is of paramount concern to both manufacturers and end users, which necessitates equipment condition monitoring schemes. An intelligent fault diagnosis system using pattern recognition tools can be developed from the results of such condition monitoring. A prototype electronic nose that uses an array of broadly tuned Taguchi metal oxide sensors was used to carry out condition monitoring of an automobile engine from its exhaust fumes, with principal component analysis (PCA) as the pattern recognition tool for diagnosing some exhaust-related faults. The results showed that plug-not-firing faults and loss-of-compression faults were diagnosable from the automobile exhaust fumes, with an average classification accuracy of 91%. Key words: Electronic nose, Condition Monitoring, Automobile, Fault, Diagnosis, PCA
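The abstract does not include the analysis code; as an illustration of how PCA projects multi-sensor electronic-nose readings into a low-dimensional space where fault classes can separate, here is a minimal sketch (the sensor count, class structure and all values are invented for illustration, not the paper's data):

```python
# Minimal sketch: PCA projection of e-nose sensor-array readings (hypothetical data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Assume each row is one exhaust-fume sample, each column one Taguchi sensor reading.
rng = np.random.default_rng(0)
normal = rng.normal(0.3, 0.05, size=(30, 8))        # hypothetical "no fault" class
plug_fault = rng.normal(0.6, 0.05, size=(30, 8))    # hypothetical "plug not firing" class
X = np.vstack([normal, plug_fault])

X_scaled = StandardScaler().fit_transform(X)          # sensors may sit on different scales
scores = PCA(n_components=2).fit_transform(X_scaled)  # project onto the first two PCs

# Classes that separate in PC space can then be labelled by simple clustering
# or by visual inspection of the score plot.
print(scores[:3])
```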
Iris feature extraction: a survey
Biometrics as a technology has proved to be a reliable means of enforcing constraints in a security-sensitive environment. Among biometric technologies, iris recognition systems are highly accurate and reliable because of the stable characteristics of the iris throughout a lifetime. Iris recognition is a biometric identification method that employs pattern recognition technology with the use of a high-resolution camera. Iris recognition consists of many stages, among which feature extraction is an important one. Extraction of iris features must be carried out successfully before the iris signature is stored as a template. This paper gives a comprehensive review of the fundamental iris feature extraction methods, together with other methods available in the literature. It also summarises the reported accuracy of the available algorithms. This establishes a platform on which future research on iris feature extraction algorithms, as a component of an iris recognition system, can be based. Keywords: biometric authentication, false acceptance rate (FAR), false rejection rate (FRR), feature extraction, iris recognition system
Application of Computer Graphics Technique to Computer System Assembling
Computer graphics is the representation and manipulation of image data by a computer, using various technologies to create and manipulate images (Shirley et al., 2005). The development of computer graphics has made computers easier to interact with, and better for understanding and interpreting different types of data. Three-dimensional (3D) computer graphics represent geometric data stored in the computer for the purpose of performing calculations and rendering 2D images, whether for later display or for real-time viewing. In this work, 3D computer graphics software is used to produce a model of the real-life assembly of computer devices into a full-blown desktop computer. The work is presented in a video format that will facilitate independent coupling of systems through a 'watch-and-fix' paradigm. Keywords: 2D, 3D, IDE, Assembling, Photo-realistic, Data-visualization, Rasterization
A MODEL FOR PREVENTIVE CONGESTION CONTROL MECHANISM IN ATM NETWORKS
Maximizing bandwidth utilization and providing performance guarantees are, in the context of multimedia networking, two incompatible goals. Heterogeneity of multimedia sources calls for effective traffic control schemes to satisfy their diverse Quality of Service (QoS) requirements. These include admission control at connection set-up, traffic control at the source end, and efficient scheduling schemes at the switches. The emphasis in this paper is on traffic control at both connection set-up and the source end. A model for Connection Admission Control (CAC) is proposed using a probabilistic technique. Mathematical formulas are derived for Cell Loss Probability (CLP), violation probability (PV) and cell throughput (TC). The performances of two UPC models (fluid-flow and approximation) are investigated using the leaky bucket (LB) algorithm. The CLP, PV and TC are computed for different traffic sources, which are characterized by their mean bit rate, peak bit rate and average number of bits generated during a burst. The simulation results show that the proposed CAC model performs satisfactorily for different traffic sources. Also, both leaky bucket models are almost coincident in policing the peak rate and mean rate of the source. Hence, the policing effect is improved considerably using the proposed model.
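The paper's UPC models and parameter values are not reproduced here; as a minimal illustration of the policing mechanism itself, the sketch below lets conforming cells pass while counting cells that would overflow the bucket as violations (the drain rate, bucket depth and arrival pattern are assumptions):

```python
# Minimal sketch of a leaky-bucket policer: cells conforming to the contracted
# rate pass; cells arriving while the bucket is full are counted as violations.
def leaky_bucket(arrivals, drain_rate, bucket_depth):
    """arrivals: list of cell counts per time slot."""
    level, passed, violations = 0.0, 0, 0
    for cells in arrivals:
        level = max(0.0, level - drain_rate)   # bucket leaks at the policed rate
        for _ in range(cells):
            if level + 1 <= bucket_depth:      # room left: cell conforms
                level += 1
                passed += 1
            else:                              # bucket overflow: non-conforming cell
                violations += 1
    return passed, violations

# Hypothetical bursty source: ten quiet slots followed by a burst.
print(leaky_bucket([0] * 10 + [5] * 4, drain_rate=1.0, bucket_depth=6))
```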
BLACKFACE SURVEILLANCE CAMERA DATABASE FOR EVALUATING FACE RECOGNITION IN LOW QUALITY SCENARIOS
Many face recognition algorithms perform poorly in real-life surveillance scenarios because they were tested with datasets that are already biased towards high-quality images and certain ethnic or racial types. In this paper, a black face surveillance camera (BFSC) database is described, which was collected from four low-quality cameras and a professional camera. Fifty (50) random volunteers took part, and 2,850 images were collected across the frontal mugshot, surveillance (visible light), surveillance (IR night vision) and pose variation datasets. Images were taken at distances of 3.4, 2.4 and 1.4 metres from the camera, while the pose variation images were taken at nine distinct pose angles in increments of 22.5 degrees to the left and right of the subject. Three face recognition algorithms (FRAs), namely the commercially available Luxand SDK, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), were evaluated for performance comparison in low-quality scenarios. The results show that camera quality (resolution), face-to-camera distance, average recognition time, lighting conditions and pose variations all affect the performance of FRAs. Luxand SDK, PCA and LDA returned overall accuracies of 97.5%, 93.8% and 92.9% respectively, after categorizing the BFSC images into excellent, good and acceptable quality scales.
Facial Image Verification and Quality Assessment System -FaceIVQA
Although several techniques have been proposed for predicting biometric system performance using quality values, many of the research works were based on a no-reference assessment technique using a single quality attribute measured directly from the data. These techniques have proved inappropriate for facial verification scenarios and inefficient, because no single quality attribute can sufficiently measure the quality of a facial image. In this research work, a facial image verification and quality assessment framework (FaceIVQA) was developed. Different algorithms and methods were implemented in FaceIVQA to extract the faceness, pose, illumination, contrast and similarity quality attributes using an objective full-reference image quality assessment approach. Structured image verification experiments were conducted on the surveillance camera (SCface) database to collect individual quality scores and algorithm matching scores from FaceIVQA using three recognition algorithms, namely principal component analysis (PCA), linear discriminant analysis (LDA) and a commercial recognition SDK. FaceIVQA produced accurate and consistent facial image assessment data. The results show that it accurately assigns quality scores to probe image samples. The resulting quality score can be assigned to images captured for enrolment or recognition and can be used as an input to quality-driven biometric fusion systems. DOI: http://dx.doi.org/10.11591/ijece.v3i6.503
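The abstract does not detail FaceIVQA's individual attribute extractors; as an illustration of the full-reference idea, where a probe image is scored against a reference image rather than assessed in isolation, here is a minimal sketch using SSIM as a similarity attribute (the function name and data are assumptions, not FaceIVQA's implementation):

```python
# Minimal sketch of a full-reference quality attribute: similarity of a probe
# face image to a high-quality reference image of the same subject, via SSIM.
import numpy as np
from skimage.metrics import structural_similarity

def similarity_quality(reference, probe):
    """Both images as 2-D grayscale arrays of equal shape, values in [0, 1]."""
    score, _ = structural_similarity(reference, probe, full=True, data_range=1.0)
    return score  # 1.0 means identical; lower values indicate a degraded probe

# Hypothetical data: a reference image and a noisy probe of the same size.
rng = np.random.default_rng(1)
reference = rng.random((64, 64))
probe = np.clip(reference + rng.normal(0, 0.1, (64, 64)), 0, 1)
print(round(similarity_quality(reference, probe), 3))
```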
DEVELOPMENT OF AN IMPACT ASSESSMENT ALGORITHM FOR THE ADOPTION OF INFORMATION AND COMMUNICATION TECHNOLOGY IN BASIC EDUCATION USING CROSS-IMPACT METHOD
In many countries, the adoption of Information and Communication Technology (ICT) in basic education has been continuously linked to higher efficiency, productivity and educational outcomes, including quality of cognitive, creative and innovative thinking. This paper focuses on the development of an impact assessment algorithm for evaluating the adoption of ICT in basic education using the cross-impact method. A questionnaire on the adoption of ICT in basic education was designed based on Government Policy (GP), Teacher Competency (TC), Availability of ICT infrastructure (IF), Integration of ICT in the school curriculum by the Ministry of Education (MC), Student preparedness in adopting ICT in the learning process (SC) and Perception of schools' management in the adoption of ICT in schools (MI), which are the six major events considered. The questionnaire was administered to experts in basic education within the selected South-Western states of Nigeria (Oyo, Lagos and Ekiti). Experts' opinions from the administered questionnaires were quantitatively analysed using descriptive statistics in the Statistical Package for the Social Sciences. The results obtained from the analysis of the questionnaires were used to derive the Initial Probability (InitProb) and generate the Conditional Probability Matrices (CondProbMatrices) for occurrence and non-occurrence of the six events under consideration. The impact assessment algorithm was developed such that its starting instructions determine the consistency of the InitProb and the CondProbMatrices using the three fundamental laws of probability calculus (normalization, product and addition rules). These are followed by sequential instructions which determine the occurrence of each event in the CondProbMatrices. Then, through repetitive instructions, each event is selected at random and its occurrence or non-occurrence is determined using a random number generator. The last group of instructions successively determines the impact of each event on the other events. Thus, the developed impact assessment algorithm could replace the existing user-perspective method of evaluating the adoption of ICT in basic education.
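The abstract describes the algorithm in prose only; as an illustration of the repetitive (Monte Carlo) part, where events are drawn in random order, each occurrence is decided by a random draw, and the probabilities of the remaining events are updated from the conditional matrices, here is a minimal sketch (the data structures, names and the exact update rule are assumptions; the consistency checks on InitProb and the CondProbMatrices are omitted):

```python
# Minimal sketch of one cross-impact simulation run over a set of events.
import random

def cross_impact_run(init_prob, cond_occ, cond_not, seed=None):
    """init_prob: {event: P(event)}; cond_occ[i][j] = P(j | i occurred);
    cond_not[i][j] = P(j | i did not occur). All structures are assumptions."""
    rng = random.Random(seed)
    prob = dict(init_prob)
    outcome = {}
    for event in rng.sample(list(prob), len(prob)):   # visit events in random order
        occurred = rng.random() < prob[event]         # random draw decides occurrence
        outcome[event] = occurred
        matrix = cond_occ if occurred else cond_not
        for other in prob:
            if other != event and other not in outcome:
                prob[other] = matrix[event][other]    # impact on the remaining events
    return outcome

# Repeating such runs many times estimates final occurrence probabilities
# under the cross-impacts encoded in the conditional matrices.
```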
FACIAL EXPRESSION RECOGNITION BASED ON CULTURAL PARTICLE SWARM OPTIMIZATION AND SUPPORT VECTOR MACHINE
Facial expressions remain a significant component of human-to-human interaction and have the potential to play a correspondingly essential part in human-computer interaction. The Support Vector Machine (SVM), by virtue of its application in various domains such as bioinformatics, pattern recognition and other nonlinear problems, has very good generalization capability. However, various studies have shown that its performance drops when applied to problems with large complexity: it consumes a large amount of memory and time as the size of the dataset increases. Optimization of SVM parameters can influence and improve its performance. Therefore, a Cultural Particle Swarm Optimization (CPSO) technique was developed to improve the performance of SVM in a facial expression recognition system. CPSO is a hybrid of the Cultural Algorithm (CA) and Particle Swarm Optimization (PSO). Six facial expression images each from forty individuals were locally acquired. One hundred and seventy-five images were used for training, while the remaining sixty-five images were used for testing. The results showed a training time of 16.32 seconds, a false positive rate of 0%, a precision of 100% and an overall accuracy of 92.31% at 250 by 250 pixel resolution. The results establish that the CPSO-SVM technique is computationally efficient, with better precision, accuracy and false positive rate, and can construct efficient and realistic facial expression features that would produce a more reliable security surveillance system in any security-prone organization.
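The CPSO formulation itself is not given in the abstract; as an illustration of how a swarm can tune SVM parameters, here is a minimal sketch of plain PSO searching (C, gamma) for an RBF SVM, with the cultural-algorithm belief space omitted and a stand-in dataset in place of the facial expression features (all bounds and coefficients are assumptions):

```python
# Minimal sketch: particle swarm search over SVM hyperparameters (C, gamma).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)          # stand-in for facial-expression features
rng = np.random.default_rng(0)
lo, hi = np.array([-1.0, -5.0]), np.array([3.0, -1.0])   # bounds on (log10 C, log10 gamma)

def fitness(pos):                             # cross-validated accuracy of one particle
    clf = SVC(C=10 ** pos[0], gamma=10 ** pos[1])
    return cross_val_score(clf, X, y, cv=3).mean()

n_particles = 8
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(10):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)          # keep particles inside the search bounds
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

print("best (log10 C, log10 gamma):", gbest, "cv accuracy:", round(pbest_fit.max(), 3))
```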
DEVELOPMENT OF A MODIFIED PARTICLE SWARM OPTIMIZATION BASED CULTURAL ALGORITHM FOR SOLVING UNIVERSITY TIMETABLING PROBLEM
Timetabling problems are search problems in which courses must be arranged around a set of timeslots so that a set of constraints is satisfied. However, slow convergence speed and high computational complexity are among the drawbacks limiting the efficiency of existing timetabling algorithms. In this paper, a Modified Particle Swarm Optimization based Cultural Algorithm, characterized by low computational complexity and high convergence speed, was developed for solving university lecture timetabling problems. The standard Particle Swarm Optimization (PSO) algorithm was modified by introducing influence factors and an acceleration component in order to improve the convergence speed of the algorithm. A cultural algorithm was formulated by incorporating the Modified Particle Swarm Optimization (MPSO) into its population space. Thus, the developed Modified Particle Swarm Optimization based Cultural Algorithm could be implemented and employed for solving lecture timetabling problems in higher institutions.
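The modified velocity update is not reproduced in the abstract; independent of the optimizer details, a timetable must first be encoded as a particle and scored against the constraints, so here is a minimal sketch of one such encoding with a clash-counting fitness (the course list, slot and room counts are assumptions, not the paper's data):

```python
# Minimal sketch: encoding a lecture timetable as a particle and scoring it by
# hard-constraint violations (the fitness a PSO-based algorithm would minimise).
import numpy as np

COURSES = ["CSC101", "CSC202", "MTH201", "PHY101"]   # hypothetical courses
N_SLOTS, N_ROOMS = 10, 2                              # hypothetical timetable size

def decode(particle):
    """Round each course's continuous position to a (timeslot, room) pair."""
    slots = np.clip(np.round(particle[:, 0]), 0, N_SLOTS - 1).astype(int)
    rooms = np.clip(np.round(particle[:, 1]), 0, N_ROOMS - 1).astype(int)
    return list(zip(slots, rooms))

def fitness(particle):
    """Count clashes: two courses assigned to the same room in the same slot."""
    assignment = decode(particle)
    return len(assignment) - len(set(assignment))     # 0 means a clash-free timetable

rng = np.random.default_rng(0)
particle = rng.uniform([0, 0], [N_SLOTS - 1, N_ROOMS - 1], size=(len(COURSES), 2))
print(decode(particle), "violations:", fitness(particle))
```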
DEVELOPMENT OF A MODIFIED CLONAL SELECTION ALGORITHM FOR FEATURE LEVEL FUSION OF MULTIBIOMETRIC SYSTEMS
Feature level fusion is the combination of biometric information contained in the extracted features of biometric images. However, feature-balance maintenance and high computational complexity are among the major problems encountered when fusion is done at feature level. Therefore, in this paper, a Modified Clonal Selection Algorithm (MCSA), characterized by feature-balance maintenance capability and low computational complexity, was developed for feature level fusion of multibiometric systems. The standard Tournament Selection Method (TSM) was modified by performing tournaments among neighbours rather than by random selection, to reduce the between-group selection pressure associated with the standard TSM. The clonal selection algorithm was formulated by incorporating the Modified Tournament Selection Method (MTSM) into its selection phase. The modified algorithm could be employed for feature level fusion of multibiometric systems.
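The exact neighbourhood rule is not given in the abstract; as an illustration of tournament selection restricted to neighbours rather than random draws from the whole population, here is a minimal sketch (the population size, neighbourhood radius and fitness values are assumptions):

```python
# Minimal sketch of tournament selection among neighbours: each tournament is
# held among individuals adjacent in the population ordering, rather than among
# individuals drawn at random from the whole population.
import random

def neighbourhood_tournament(fitness, radius=2, seed=None):
    """fitness: list of fitness values, one per individual (higher is better).
    Returns the index of the selected parent for one tournament."""
    rng = random.Random(seed)
    i = rng.randrange(len(fitness))                          # tournament centre
    neighbours = [(i + d) % len(fitness) for d in range(-radius, radius + 1)]
    return max(neighbours, key=lambda j: fitness[j])         # best neighbour wins

# Hypothetical fitness values for a population of eight candidate feature sets.
pop_fitness = [0.61, 0.75, 0.52, 0.91, 0.44, 0.68, 0.83, 0.57]
print([neighbourhood_tournament(pop_fitness, seed=s) for s in range(5)])
```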