625 research outputs found

    A study of extrinsic motivation among Polytechnic Certificate and Diploma graduates in the Civil Engineering Department, KUiTTHO

    This study investigated the influence of family encouragement, lecturers' teaching styles, peer influence, and infrastructure facilities on the extrinsic motivation of third- and fourth-year polytechnic certificate and diploma graduates in the Civil Engineering Department of Kolej Universiti Teknologi Tun Hussein Onn. The sample comprised 87 polytechnic certificate graduates and 38 polytechnic diploma graduates. Data were collected through questionnaires and analysed using SPSS (Statistical Package for the Social Sciences). The findings are presented as tables and histograms. The analysis found that both groups agreed that the factors above affect their extrinsic motivation; in other words, these factors are important in helping students achieve academic excellence

    The Study and Literature Review of a Feature Extraction Mechanism in Computer Vision

    Detecting features in an image is a challenging task in computer vision and in numerous image processing applications. For example, numerous algorithms exist to detect the corners in an image. Corners are formed where multiple edges meet, and they do not always lie on the boundary of an object. This paper concentrates on the study of the Harris corner detection algorithm, which accurately detects the corners present in an image. The Harris corner detector is a widely used interest point detector because its features are robust to rotation, scale, illumination changes, and noise. It is based on the local auto-correlation function of a signal, which measures the local changes of the signal when patches are shifted by a small amount in different directions. In our experiments we show results for grayscale images as well as for colour images, where responses are obtained for the individual regions present in the image. This algorithm is more reliable than conventional methods
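The local auto-correlation idea behind the Harris detector can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the window radius, the constant k, and the synthetic test image are arbitrary choices made here for demonstration.

```python
import numpy as np

def harris_response(img, k=0.05, radius=2):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor (local auto-correlation) summed over a square window."""
    Iy, Ix = np.gradient(img.astype(float))          # image gradients

    def window_sum(a):
        # sum over a (2*radius+1)^2 neighbourhood (wraps at image borders)
        out = np.zeros_like(a)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx = window_sum(Ix * Ix)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

# synthetic corner: a bright square occupying the lower-right quadrant
img = np.zeros((20, 20))
img[8:, 8:] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)       # peak lands near (8, 8)
```

On the edges of the square only one gradient direction is present, so det(M) stays near zero; only where both edges meet does the response become large and positive.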

    Region-Based Watermarking of Biometric Images: Case Study in Fingerprint Images

    In this paper, a novel scheme to watermark biometric images is proposed. It exploits the fact that biometric images normally have one region of interest, which carries the relevant information processed by most biometric identification/authentication systems. The proposed scheme embeds the watermark into the region of interest only, thus preserving the hidden data from the segmentation process that removes the useless background and keeps the region of interest unaltered; segmentation could otherwise be used by an attacker as a cropping attack. The scheme also provides more robustness and better imperceptibility of the embedded watermark. It is incorporated into the optimum watermark detector in order to improve detection performance, and it is applied to fingerprint images, one of the most widely used and studied biometric modalities. The watermarking is assessed in two well-known transform domains: the discrete wavelet transform (DWT) and the discrete Fourier transform (DFT). The results obtained clearly show significant improvements over the standard technique, which operates on the whole image. They also reveal that the segmentation (cropping) attack does not affect the performance of the proposed technique, which likewise shows more robustness against other common attacks
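The ROI-only embedding idea can be illustrated with a toy spread-spectrum scheme in the DFT domain. This is a hypothetical sketch, not the paper's algorithm: the mid-frequency ring, the strength alpha, the correlation detector, and all parameter values are illustrative choices.

```python
import numpy as np

def _band_and_pattern(shape, seed):
    """Mid-frequency ring mask plus a seeded pseudo-random pattern."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    band = (r > min(h, w) / 8) & (r < min(h, w) / 4)
    pattern = np.random.default_rng(seed).standard_normal(shape)
    return band, pattern

def embed_roi(img, roi, alpha=25.0, seed=7):
    """Additively embed the pattern into mid-frequency DFT coefficients
    of the region of interest only; the background stays untouched."""
    y0, y1, x0, x1 = roi
    block = img[y0:y1, x0:x1].astype(float)
    band, pat = _band_and_pattern(block.shape, seed)
    F = np.fft.fftshift(np.fft.fft2(block))
    marked = np.fft.ifft2(np.fft.ifftshift(F + alpha * band * pat)).real
    out = img.astype(float).copy()
    out[y0:y1, x0:x1] = marked
    return out

def detect_roi(img, roi, seed=7):
    """Correlation statistic: large and positive if the watermark is present."""
    y0, y1, x0, x1 = roi
    block = img[y0:y1, x0:x1].astype(float)
    band, pat = _band_and_pattern(block.shape, seed)
    F = np.fft.fftshift(np.fft.fft2(block))
    vals = F.real[band] * pat[band]
    return vals.mean() / (vals.std() + 1e-12)

# smooth synthetic stand-in for a fingerprint image, ROI in the centre
img = np.add.outer(np.arange(64.0), np.arange(64.0))
roi = (16, 48, 16, 48)
marked = embed_roi(img, roi)
score_marked, score_clean = detect_roi(marked, roi), detect_roi(img, roi)
```

Because detection operates on the ROI alone, cropping the image down to the segmented region leaves the detector's input, and hence the statistic, unchanged, which mirrors the robustness-to-cropping argument above.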

    Parameter optimization for local polynomial approximation based intersection confidence interval filter using genetic algorithm: an application for brain MRI image de-noising

    Magnetic resonance imaging (MRI) is extensively exploited for diagnosis and for accurate detection of pathological changes. Conversely, MRI suffers from various shortcomings such as ambient noise from the environment, acquisition noise from the equipment, the presence of background tissue, breathing motion, body fat, etc. Consequently, noise reduction is critical, as the diverse types of generated noise limit the efficiency of medical image diagnosis. The local polynomial approximation based intersection confidence interval (LPA-ICI) filter is one of the effective de-noising filters. This filter requires an adjustment of the ICI parameters for efficient window size selection. Given the wide range of ICI parameter values, finding the best set of tuned values is itself an optimization problem. The present study proposes a novel technique for parameter optimization of the LPA-ICI filter using a genetic algorithm (GA) for brain MR image de-noising. The experimental results show that the proposed method outperforms the plain LPA-ICI method in terms of various performance metrics at different noise variance levels. The results also indicate that the optimal ICI parameter values depend on the noise variance and on the image under test
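A GA-based parameter search of this kind can be sketched with a minimal real-coded genetic algorithm. This is a sketch under stated assumptions: the operators (tournament selection, blend crossover, Gaussian mutation, elitism) and the quadratic toy fitness, standing in for PSNR as a function of one ICI threshold, are illustrative choices, not the method of the paper.

```python
import numpy as np

def genetic_search(fitness, bounds, pop_size=20, gens=40, mut=0.1, seed=0):
    """Maximise `fitness` over a scalar parameter with a tiny real-coded GA:
    tournament selection, blend crossover, Gaussian mutation, elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(gens):
        fit = np.array([fitness(x) for x in pop])
        best = pop[np.argmax(fit)]                              # elite
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where(fit[a] > fit[b], pop[a], pop[b])     # tournaments
        t = rng.uniform(size=pop_size)
        children = t * parents + (1 - t) * rng.permutation(parents)  # blend
        children += rng.normal(0.0, mut * (hi - lo), pop_size)  # mutation
        pop = np.clip(children, lo, hi)
        pop[0] = best                                           # elitism
    return pop[np.argmax([fitness(x) for x in pop])]

# toy stand-in for "PSNR as a function of the ICI threshold": peak at 1.7
gamma = genetic_search(lambda g: -(g - 1.7) ** 2, bounds=(0.0, 3.0))
```

In the actual application the fitness call would run the LPA-ICI filter with the candidate parameters on the noisy MR image and score the result, which is why population size and generation count become the dominant cost.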

    Advancing the technology of sclera recognition

    PhD Thesis. Emerging biometric traits have been suggested recently to overcome challenges and issues related to utilising traditional human biometric traits such as the face, iris, and fingerprint. In particular, iris recognition has achieved high accuracy rates under the Near-InfraRed (NIR) spectrum and is employed in many applications for security and identification purposes. However, as modern imaging devices operate in the visible spectrum and capture colour images, iris recognition has faced challenges when applied to colour images, especially eye images with dark pigmentation. Other issues with iris recognition under the NIR spectrum are the constraints on the capturing process, resulting in failure-to-enrol and degradation in system accuracy and performance. As a result, the research community has investigated using other traits to support the iris biometric in the visible spectrum, such as the sclera. The sclera, commonly known as the white part of the eye, includes a complex network of blood vessels and veins surrounding the eye. The vascular pattern within the sclera has distinct formations and layers, providing powerful features for human identification. In addition, these blood vessels can be acquired in the visible spectrum and thus captured with ubiquitous camera-based devices. As a consequence, recent research has focused on developing sclera recognition. However, sclera recognition, like any biometric system, has issues and challenges which need to be addressed. These issues are mainly related to sclera segmentation, blood vessel enhancement, feature extraction, template registration, matching, and decision methods.
    In addition, employing the sclera biometric in the wild, where relaxed imaging constraints are utilised, has introduced more challenges such as illumination variation, specular reflections, non-cooperative capturing, sclera regions blocked by glasses and eyelashes, variation in capturing distance, multiple gaze directions, and eye rotation. The aim of this thesis is to address these sclera biometric challenges and highlight the potential of this trait, which might also inspire further research on tackling sclera recognition system issues. To overcome the above-mentioned issues and challenges, three major contributions are made, which can be summarised as: 1) designing an efficient sclera recognition system under constrained imaging conditions, including new sclera segmentation, blood vessel enhancement, vascular binary network mapping and feature extraction, and template registration techniques; 2) introducing a novel sclera recognition system under relaxed imaging constraints, which exploits novel sclera segmentation, sclera template rotation alignment and distance scaling methods, and complex sclera features; 3) presenting solutions to issues that arise when applying sclera recognition in a real-time application, such as eye localisation, eye corner and gaze detection, together with a novel image quality metric. The evaluation of the proposed contributions uses five databases with different properties representing various challenges and issues: UBIRIS.v1, UBIRIS.v2, UTIRIS, MICHE, and an in-house database. The results in terms of segmentation accuracy, Equal Error Rate (EER), and processing time show significant improvement in the proposed systems compared to state-of-the-art methods. This work was supported by the Ministry of Higher Education and Scientific Research in Iraq and the Iraqi Cultural Attaché in London

    Iris Identification using Keypoint Descriptors and Geometric Hashing

    Iris is one of the most reliable biometric traits due to its stability and randomness. Conventional recognition systems transform the iris to polar coordinates and perform well for cooperative databases. However, the problem becomes considerably harder for recognising non-cooperative irises. In addition, the transformation of the iris to the polar domain introduces aliasing effects. In this thesis, the aforementioned issues are addressed by considering the noise-independent annular iris for feature extraction. Global feature extraction approaches are unsuitable for the annular iris because, under changes in scale, they cannot achieve invariance to transformation and illumination. On the contrary, local features are invariant to image scaling and rotation, and partially invariant to changes in illumination and viewpoint. To extract local features, Harris corner points are detected in the iris and matched using a novel dual-stage approach. The Harris corner detector improves accuracy but fails to achieve scale invariance. Further, the Scale Invariant Feature Transform (SIFT) has been applied to the annular iris and the results are found to be very promising. However, SIFT is computationally expensive for recognition due to its higher-dimensional descriptor. Thus, a more recent keypoint descriptor called Speeded Up Robust Features (SURF) is applied, yielding performance improvements in terms of both time and accuracy. For identification, retrieval time plays a significant role in addition to accuracy. Traditional indexing approaches cannot be applied to biometrics because the data are unstructured. In this thesis, two novel approaches have been developed for indexing an iris database. In the first approach, an energy histogram of DCT coefficients is used to form a B-tree; this approach performs well for cooperative databases. In the second approach, indexing is done using geometric hashing of SIFT keypoints.
    The latter indexing approach achieves invariance to similarity transformations, illumination, and occlusion, and performs with an accuracy of more than 98% for cooperative as well as non-cooperative databases
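A core step shared by the SIFT/SURF stages above is nearest-neighbour descriptor matching with a ratio test. The sketch below is a generic illustration with made-up toy descriptors, not the thesis's dual-stage matcher; the 0.8 threshold is an arbitrary choice.

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in `desc_a` to its nearest neighbour in
    `desc_b`, keeping it only if the best distance beats the second-best
    by the given ratio (Lowe-style ratio test)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = d[rows, best] < ratio * d[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]

# toy 2-D descriptors: each query has one clearly closest candidate
queries = np.array([[0.0, 0.0], [5.0, 5.0]])
candidates = np.array([[0.1, 0.0], [10.0, 10.0], [5.0, 5.1]])
matches = ratio_match(queries, candidates)   # [(0, 0), (1, 2)]
```

The ratio test is what discards ambiguous keypoints: a query whose two nearest candidates are almost equally close is dropped rather than matched, which matters for occluded or non-cooperative iris images.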

    High-performance hardware accelerators for image processing in space applications

    Mars is a hard place to reach. While there have been many notable success stories in getting probes to the Red Planet, the historical record is full of bad news, and the success rate for actually landing on the Martian surface is even worse, roughly 30%. This low success rate is mainly due to the characteristics of the Mars environment. Strong winds frequently blow in the Mars atmosphere; this phenomenon usually modifies the lander's descending trajectory, diverging it from the target one. Moreover, the Mars surface is not an easy place to perform a safe landing: it is pitted by many densely packed craters and huge stones, and characterized by huge mountains and hills (e.g., Olympus Mons is 648 km in diameter and 27 km tall). For these reasons, a mission failure due to landing in a large crater, on big stones, or on a part of the surface with a high slope is highly probable. In recent years, all space agencies have increased their research efforts to enhance the success rate of Mars missions. In particular, the two hottest research topics are active debris removal and guided landing on Mars. The former aims at finding new methods to remove space debris using unmanned spacecraft. These must be able to autonomously detect a debris object, analyse it in order to extract its characteristics in terms of weight, speed, and dimension, and, eventually, rendezvous with it. To perform these tasks, the spacecraft must have strong vision capabilities: it must be able to take pictures and process them with very complex image processing algorithms in order to detect, track, and analyse the debris. The latter aims at increasing the landing point precision (i.e., shrinking the landing ellipse) on Mars. Future space missions will increasingly adopt video-based navigation systems to assist the entry, descent and landing (EDL) phase of space modules (e.g., spacecraft), enhancing the precision of automatic EDL navigation systems.
    For instance, recent space exploration missions, e.g., Spirit, Opportunity, and Curiosity, made use of an EDL procedure aiming at following a fixed, precomputed descending trajectory to reach a precise landing point. This approach guarantees a landing point precision no better than about 20 km. Comparing this figure with the characteristics of the Mars environment makes clear how high the mission failure probability still remains. A very challenging problem is to design an autonomously guided EDL system able to further reduce the landing ellipse, guaranteeing avoidance of landings in dangerous areas of the Mars surface (e.g., huge craters or big stones) that could lead to mission failure. The autonomous behaviour of the system is mandatory, since a manually driven approach is not feasible due to the distance between Earth and Mars. Since this distance varies from approximately 56 to 100 million km due to orbital eccentricity, even with signal transmission at the speed of light, a one-way transmission in the best case would take around 3.1 minutes, so a full command round trip would consume a large part of the EDL phase. In both applications, the algorithms must guarantee self-adaptability to the environmental conditions. Since the harsh conditions on Mars (and in space in general) are difficult to predict at design time, these algorithms must be able to automatically tune their internal parameters depending on the current conditions. Moreover, real-time performance is another key factor. Since a software implementation of these computationally intensive tasks cannot reach the required performance, these algorithms must be accelerated in hardware. For these reasons, this thesis presents my research work on advanced image processing algorithms for space applications and the associated hardware accelerators. My research activity has been focused on both the algorithms and their hardware implementations.
    Concerning the first aspect, I mainly focused my research effort on integrating self-adaptability features into the existing algorithms. Concerning the second, I studied and validated a methodology to efficiently develop, verify, and validate hardware components aimed at accelerating video-based applications. This approach allowed me to develop and test high-performance hardware accelerators that strongly outperform the current state-of-the-art implementations. The thesis is organized in four main chapters. Chapter 2 starts with a brief introduction to the history of digital image processing. The main content of this chapter is the description of space missions in which digital image processing plays a key role. A major effort has been spent on the missions on which my research activity has a substantial impact; in particular, for these missions, this chapter deeply analyzes and evaluates the state-of-the-art approaches and algorithms. Chapter 3 analyzes and compares the two technologies used to implement high-performance hardware accelerators, i.e., Application Specific Integrated Circuits (ASICs) and Field Programmable Gate Arrays (FPGAs). This information helps the reader understand the main reasons behind the decision of space agencies to exploit FPGAs instead of ASICs for high-performance hardware accelerators in space missions, even though FPGAs are more sensitive to Single Event Upsets (SEUs, i.e., transient errors induced in hardware components by alpha particles and solar radiation in space). Moreover, this chapter describes in depth the three available space-grade FPGA technologies (i.e., one-time programmable, flash-based, and SRAM-based) and the main fault-mitigation techniques against SEUs that are mandatory for employing space-grade FPGAs in actual missions. Chapter 4 describes one of the main contributions of my research work: a library of high-performance hardware accelerators for image processing in space applications.
    The basic idea behind this library is to offer designers a set of validated hardware components able to strongly speed up the basic image processing operations commonly used in an image processing chain. In other words, these components can be directly used as elementary building blocks to easily create a complex image processing system, without wasting time in the debug and validation phase. The library groups the proposed hardware accelerators into IP-core families. The components in the same family share the same provided functionality and input/output interface. This harmonization of the I/O interface makes it possible to substitute, inside a complex image processing system, components of the same family without requiring modifications to the system communication infrastructure. In addition to the analysis of the internal architecture of the proposed components, another important aspect of this chapter is the methodology used to develop, verify, and validate the proposed high-performance image processing hardware accelerators. This methodology involves the use of different programming and hardware description languages in order to support the designer from algorithm modelling up to hardware implementation and validation. Chapter 5 presents the proposed complex image processing systems. In particular, it exploits a set of actual case studies, associated with the most recent space agency needs, to show how the hardware accelerator components can be assembled to build a complex image processing system. In addition to the hardware accelerators contained in the library, the described complex systems embed innovative ad-hoc hardware components and software routines able to provide high-performance and self-adaptable image processing functionalities.
    To prove the benefits of the proposed methodology, each case study concludes with a comparison against the current state-of-the-art implementations, highlighting the benefits in terms of performance and self-adaptability to the environmental conditions