
    Color space analysis for iris recognition

    This thesis investigates issues related to the processing of multispectral and color infrared images of the iris. When utilizing the color bands of the electromagnetic spectrum, the eye color and the components of texture (luminosity and chromaticity) must be considered. This work examines the effect of eye color on texture-based iris recognition in both the near-IR and visible bands. To this end, a novel score-level fusion algorithm for multispectral iris recognition is presented. The fusion algorithm - based on evidence that the matching performance of a texture-based encoding scheme is affected by the quality of texture within the original image - ranks the spectral bands of the image by texture quality and designs a fusion rule based on these rankings. Color space analysis, to determine an optimal representation scheme, is also examined in this thesis. Color images are transformed from the sRGB color space to the CIE Lab, YCbCr, CMYK and HSV color spaces prior to encoding and matching. Also, enhancement methods to increase the contrast of the texture within the iris, without altering the chromaticity of the image, are discussed. Finally, cross-band matching is performed to illustrate the correlation between eye color and specific bands of the color image.
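As an illustration of the color space transformations the abstract mentions, the sketch below converts normalized sRGB triples to HSV, CMYK and YCbCr using standard formulas (the stdlib `colorsys` for HSV, ITU-R BT.601 weights for YCbCr); the sRGB-to-CIE-Lab transform requires an intermediate XYZ step and is omitted for brevity. This is a minimal sketch, not the thesis's implementation.

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Convert normalized sRGB (0-1) to HSV via the stdlib colorsys."""
    return colorsys.rgb_to_hsv(r, g, b)

def rgb_to_cmyk(r, g, b):
    """Convert normalized sRGB (0-1) to CMYK with the usual formula."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                      # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

def rgb_to_ycbcr(r, g, b):
    """Convert normalized sRGB (0-1) to YCbCr (ITU-R BT.601, analog form,
    so Cb and Cr lie in roughly [-0.5, 0.5])."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

Each band of the transformed image would then be encoded and matched separately, which is what makes per-band texture-quality ranking possible.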

    Novel Approach for Detection and Removal of Moving Cast Shadows Based on RGB, HSV and YUV Color Spaces

    Cast shadows affect computer vision tasks such as image segmentation, object detection and tracking, since objects and their shadows share the same visual motion characteristics. This unavoidable problem degrades video surveillance system performance. The basic idea of this paper is to exploit the evidence that shadows darken the surfaces they are cast upon. To this end, we propose a simple and accurate method for detecting moving cast shadows based on chromatic properties in the RGB, HSV and YUV color spaces. The method requires no a priori assumptions about the scene or lighting source. Starting from a normalization step, we apply a Canny filter to detect the boundary between self-shadow and cast shadow; this step is performed only on the first frame of the sequence. We then separate the background from moving objects using an improved version of the Gaussian mixture model. To remove unwanted shadows completely, we use three change estimators computed from the intensity ratio in the HSV color space, chromaticity properties in the RGB color space, and the brightness ratio in the YUV color space. Only pixels that satisfy the thresholds of all three estimators are labeled as shadow and removed. Experiments carried out on various video databases show that the proposed system is robust and efficient and can precisely remove shadows for a wide class of environments without any scene assumptions. Experimental results also show that our approach outperforms existing methods and can run in real-time systems.
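The HSV estimator among the three above can be illustrated in isolation. The sketch below is a simplified per-pixel version of the classic HSV shadow test (not the paper's exact formulation): a pixel is labeled shadow when its value channel is attenuated by a bounded factor relative to the background model while hue and saturation stay nearly unchanged. The thresholds `alpha`, `beta`, `tau_s` and `tau_h` are illustrative assumptions, not values from the paper.

```python
import colorsys

def is_shadow(fg_rgb, bg_rgb, alpha=0.4, beta=0.9, tau_s=0.15, tau_h=0.1):
    """HSV ratio test for a cast-shadow pixel: a shadow darkens V by a
    bounded factor while leaving H and S close to the background model.
    Both inputs are normalized (r, g, b) triples in [0, 1]."""
    fh, fs, fv = colorsys.rgb_to_hsv(*fg_rgb)
    bh, bs, bv = colorsys.rgb_to_hsv(*bg_rgb)
    if bv == 0.0:                       # background already black
        return False
    # Note: a full implementation would handle hue wrap-around at 0/1.
    return (alpha <= fv / bv <= beta
            and abs(fs - bs) <= tau_s
            and abs(fh - bh) <= tau_h)
```

In the paper's pipeline this decision is ANDed with the RGB chromaticity and YUV brightness-ratio estimators, so a pixel must pass all three tests to be removed as shadow.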

    Melioration of color calibration, goal detection and self-localization systems of NAO humanoid robots

    This thesis presents work on software development for autonomous robot soccer, covering color calibration, object detection and robot localization. A novel YUV color space based method for automating color calibration is proposed. A detailed description of the automatic color calibration technique's implementation is provided, along with visual results illustrating the performance of the method. Changes implemented in the goal detection module, and the motivation behind them, are described in detail, giving a good overview of the logic of the object recognition algorithm. The localization system currently in use is described and, finally, a technique for enhancing it is proposed and explained.
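As a rough illustration of the kind of YUV-based color classification such calibration produces, the sketch below converts RGB to analog YUV (BT.601 weights) and labels pixels with axis-aligned YUV boxes. The box thresholds and the `classify` helper are hypothetical; the thesis's contribution is deriving such ranges automatically rather than hand-tuning them.

```python
def rgb_to_yuv(r, g, b):
    """Normalized RGB (0-1) to analog YUV (ITU-R BT.601 weights)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def classify(rgb, classes):
    """Label a pixel with the first color class whose YUV box contains it.
    `classes` maps label -> ((y_lo, y_hi), (u_lo, u_hi), (v_lo, v_hi))."""
    y, u, v = rgb_to_yuv(*rgb)
    for label, ((ylo, yhi), (ulo, uhi), (vlo, vhi)) in classes.items():
        if ylo <= y <= yhi and ulo <= u <= uhi and vlo <= v <= vhi:
            return label
    return "unknown"
```

In a RoboCup-style pipeline these per-class boxes are typically baked into a lookup table so each camera pixel can be labeled in constant time.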

    Exploring Color Hierarchy Learning in Convolutional Neural Networks

    Master's thesis, Seoul National University Graduate School: Interdisciplinary Program in Cognitive Science, College of Humanities, August 2020. Advisor: Byoung-Tak Zhang. Empirical evidence suggests that color categories emerge in a universal, recurrent, hierarchical pattern across different cultures in the following order: white, black < red < green, yellow < blue < brown < pink, gray, orange, and purple. This pattern is referred to as the "Color Hierarchy". Over two experiments, the present study examines whether there is evidence for such hierarchical color category learning patterns in Convolutional Neural Networks (CNNs). Experiment A investigates whether color categories are learned randomly, or in a fixed, hierarchical fashion. Results show that colors higher up the Color Hierarchy (e.g. red) are generally learned before colors lower down the hierarchy (e.g. brown, orange, gray). Experiment B examines whether object color affects recall in object detection. Similar to Experiment A, results show that object recall is noticeably impacted by color, with colors higher up the Color Hierarchy generally showing better recall. Additionally, objects whose color can be described by adjectives that emphasise colorfulness (e.g. vivid, brilliant, deep) show better recall than those described by adjectives which de-emphasise colorfulness (e.g. dark, pale, light). The effect of both color hue and adjective on object recall is still observable even when controlling for contrast through grayscale images. These results highlight similarities between humans and CNNs in color perception, and provide insight into factors that influence object detection. They also show the value of using deep learning techniques as a means of investigating cognitive universalities in an efficient, unbiased, cost-effective way.
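The per-class recall metric underlying both experiments can be sketched as follows. This is a generic illustration of the metric, not the study's evaluation code, and the example labels in the usage note are made up; ranking colors by this recall is what lets the learned ordering be compared against the Color Hierarchy.

```python
from collections import defaultdict

def per_class_recall(y_true, y_pred):
    """Recall per color class: correct predictions / true instances."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        totals[truth] += 1
        if truth == pred:
            hits[truth] += 1
    return {c: hits[c] / totals[c] for c in totals}
```

For example, `per_class_recall(["red", "red", "brown", "brown"], ["red", "red", "brown", "gray"])` gives recall 1.0 for red and 0.5 for brown, consistent with a color higher up the hierarchy being learned first.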

    Boundary, Brightness, and Depth Interactions During Preattentive Representation and Attentive Recognition of Figure and Ground

    This article applies a recent theory of 3-D biological vision, called FACADE Theory, to explain several percepts which Kanizsa pioneered. These include 3-D pop-out of an occluding form in front of an occluded form, leading to completion and recognition of the occluded form; 3-D transparent and opaque percepts of Kanizsa squares, with and without Varin wedges; and interactions between percepts of illusory contours, brightness, and depth in response to 2-D Kanizsa images. These explanations clarify how a partially occluded object representation can be completed for purposes of object recognition, without the completed part of the representation necessarily being seen. The theory traces these percepts to neural mechanisms that compensate for measurement uncertainty and complementarity at individual cortical processing stages by using parallel and hierarchical interactions among several cortical processing stages. These interactions are modelled by a Boundary Contour System (BCS) that generates emergent boundary segmentations and a complementary Feature Contour System (FCS) that fills-in surface representations of brightness, color, and depth. The BCS and FCS interact reciprocally with an Object Recognition System (ORS) that binds BCS boundary and FCS surface representations into attentive object representations. The BCS models the parvocellular LGN→Interblob→Interstripe→V4 cortical processing stream, the FCS models the parvocellular LGN→Blob→Thin Stripe→V4 cortical processing stream, and the ORS models inferotemporal cortex.
    Air Force Office of Scientific Research (F49620-92-J-0499); Defense Advanced Research Projects Agency (N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100)

    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1 --> MT and V1 --> V2 --> MT are made, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. 
    Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.
    Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)

    How Is a Moving Target Continuously Tracked Behind Occluding Cover?

    Office of Naval Research (N00014-95-1-0657, N00014-95-1-0409)