
    Modelling of chromatic contrast for retrieval of wallpaper images

    Colour remains one of the key factors in presenting an object and has consequently been widely applied in content-based image retrieval. However, colour appearance changes with the viewing surround, a phenomenon that has so far received little attention in colour-based image retrieval. To account for this effect, this paper develops a chromatic contrast model, CAMcc, for the retrieval of colour-intensive images, filling a gap left by most existing colour models by taking simultaneous colour contrast into account. The model is then applied to a retrieval task on a collection of colour-rich museum wallpaper images. In comparison with popular colour models including CIECAM02, HSI, and RGB, evaluated with respect to both foreground and background colours, CAMcc outperforms the others, with retrieved results closer to the query images. In addition, CAMcc gives greater weight to foreground colours while maintaining a balance between foreground and background, whereas the other models favour the dominant colours that are perceived most readily, usually background tones. The contribution of this investigation lies not only in improving the accuracy of colour-based image retrieval, but also in developing a colour contrast model that holds an important place in colour and computer vision theory, shedding light on the age-old topic of chromatic contrast in colour science.
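    The abstract does not give CAMcc's equations, so the Python sketch below only illustrates the surrounding retrieval idea: describe each image by separate foreground and background colour histograms and rank database images by a weighted combination of the two distances. The function names, the foreground mask, and the weights are illustrative assumptions, not the CAMcc model itself.

# Minimal sketch of foreground/background-weighted colour retrieval.
# NOT the CAMcc model: its chromatic-contrast terms are not reproduced here.
import numpy as np

def colour_histogram(pixels, bins=8):
    """3-D colour histogram of an (N, 3) pixel array with values in [0, 1]."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0.0, 1.0), (0.0, 1.0), (0.0, 1.0)))
    total = hist.sum()
    return hist.ravel() / total if total else hist.ravel()

def describe(image, fg_mask):
    """Split an (H, W, 3) image into foreground and background histograms."""
    return colour_histogram(image[fg_mask]), colour_histogram(image[~fg_mask])

def distance(desc_a, desc_b, w_fg=0.6, w_bg=0.4):
    """Weighted chi-square distance; the weights are illustrative only."""
    chi2 = lambda p, q: 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-9))
    return w_fg * chi2(desc_a[0], desc_b[0]) + w_bg * chi2(desc_a[1], desc_b[1])

def retrieve(query_desc, database_descs, k=5):
    """Indices of the k database images closest to the query."""
    dists = [distance(query_desc, d) for d in database_descs]
    return np.argsort(dists)[:k]

    In the paper's terms, CAMcc would replace the plain histogram descriptor with contrast-adjusted colour attributes; the weighting above merely mirrors the reported emphasis on foreground colours.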

    Modelling of colour appearance of textured colours and smartphones using CIECAM02

    The International Commission on Illumination (CIE) recommended a colour appearance model, CIECAM02, in 2002 to predict colours under various viewing conditions with the accuracy of an average observer. This research attempts to extend the model to predict colours displayed on mobile telephones, which the model does not cover. Despite its limited size and capacity, the mobile telephone is used for an ever-growing range of everyday tasks, such as viewing and taking pictures and shopping online, thanks to its appealing appearance, versatility, and readiness to hand. While a smartphone can act as a mini-computer, it does not always offer the same functionality as a desktop computer; for example, the RGB values on a smartphone normally cannot be modified, nor can the white balance be checked. As a result, online shopping on a mobile telephone can be difficult, especially when buying colour-sensitive items. This research therefore investigates the colour variations of a number of smartphones and attempts to predict their colour appearance using CIECAM02, benefiting both telephone users and manufacturers. The thesis studies the Apple iPhone 5, LG Nexus 4, and Samsung and Huawei models, and compares their performance with a CRT colour monitor calibrated to the D65 standard, to be consistent with the usual way of viewing colours online. As expected, all the telephones tested present more colourful images than the CRT. Work was also undertaken to investigate colours with a degree of texture: on CRT monitors, a textured colour appears darker but more colourful to a human observer. Linear modifications to the CIECAM02 model have been proposed and implemented to accommodate these textured colours.
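    As a rough illustration of the kind of linear modification described above, the following sketch adjusts CIECAM02 lightness (J) and colourfulness (M) predictions so that a textured colour comes out darker but more colourful. The coefficients are placeholders for illustration only, not the values fitted in the thesis.

# Sketch of a linear texture correction applied to CIECAM02 predictions.
# Coefficients below are illustrative placeholders, not fitted values.
def correct_for_texture(J, M, a_J=0.95, b_J=-2.0, a_M=1.10, b_M=1.5):
    """Apply a simple linear adjustment to CIECAM02 lightness J and colourfulness M."""
    J_textured = a_J * J + b_J   # textured colour appears slightly darker
    M_textured = a_M * M + b_M   # and slightly more colourful
    return J_textured, M_textured

# Example: a mid-lightness, moderately colourful patch.
print(correct_for_texture(J=50.0, M=30.0))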

    Facial analysis in video: detection and recognition

    Biometric authentication systems automatically identify or verify individuals using physiological (e.g., face, fingerprint, hand geometry, retina scan) or behavioral (e.g., speaking pattern, signature, keystroke dynamics) characteristics. Among these biometrics, facial patterns have the major advantage of being the least intrusive, so automatic face recognition systems have great potential across a wide spectrum of application areas. Focusing on facial analysis, this dissertation presents a face detection method and several feature extraction methods for face recognition. For face detection, a video-based frontal face detection method has been developed that uses motion analysis and color information to derive fields of interest, and distribution-based distance (DBD) and a support vector machine (SVM) for classification. When applied to 92 still images containing 282 faces, this method achieves a 98.2% face detection rate with two false detections, a performance comparable to state-of-the-art face detection methods; when applied to video streams, it detects faces reliably and efficiently. For face recognition, extensive assessments of recognition performance in twelve color spaces have been performed, and a color feature extraction method defined by color component images drawn from different color spaces is shown to improve the baseline performance on the Face Recognition Grand Challenge (FRGC) problems. The experimental results show that some color configurations, such as YV in the YUV color space and YJ in the YIQ color space, help improve face recognition performance. Building on these results, a novel feature extraction method combining genetic algorithms (GAs) and the Fisher linear discriminant (FLD) is designed to derive optimal discriminating features that lead to an effective image representation for face recognition. This method noticeably improves the FRGC ver1.0 Experiment 4 baseline recognition rate from 37% to 73%, and significantly raises the FRGC xxxx Experiment 4 baseline verification rate from 12% to 69%. Finally, four two-dimensional (2D) convolution filters are derived for feature extraction, and a 2D+3D face recognition system implementing both 2D and 3D imaging modalities is designed to address the FRGC problems, improving the FRGC ver2.0 Experiment 3 baseline performance from 54% to 72%.
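    As a small illustration of the color-configuration idea above, the sketch below converts an RGB face image to YUV with the standard BT.601 matrix and concatenates the Y and V component images (one of the configurations named in the abstract) into a single feature vector. The downstream GA/FLD stages are not shown, and the function names are illustrative assumptions.

# Sketch: build a YV color-configuration feature vector from an RGB face image.
import numpy as np

# Standard BT.601 RGB -> YUV conversion matrix.
RGB_TO_YUV = np.array([[ 0.299,    0.587,    0.114  ],
                       [-0.14713, -0.28886,  0.436  ],
                       [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(image):
    """Convert an (H, W, 3) RGB image with values in [0, 1] to YUV."""
    return image @ RGB_TO_YUV.T

def yv_feature_vector(image):
    """Concatenate the Y and V component images into one feature vector."""
    yuv = rgb_to_yuv(image)
    y, v = yuv[..., 0], yuv[..., 2]
    return np.concatenate([y.ravel(), v.ravel()])

# Example: a random 32x32 "face" image yields a 2048-dimensional feature vector.
print(yv_feature_vector(np.random.rand(32, 32, 3)).shape)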