7 research outputs found

    Kalman Filtering of a Class of Dynamic Object Images

    We discuss the problem of estimating the state of a dynamic object from observed images generated by an optical system. The work aims to implement a novel approach that improves the accuracy of dynamic object tracking over a sequence of images. We use a vector model that describes the object image as a limited number of vertices (reference points). During imaging, the object of interest is assumed to be held at the center of each frame, so that the motion parameters can be treated as projections onto the axes of a coordinate system aligned with the camera's optical axis. The novelty of the approach is that the observed parameters of the object (the distance along the optical axis and the angular attitude) are computed from the coordinates of specified points in the object images. To estimate the object's state, a Kalman-Bucy filter is constructed under the assumption that the motion of the dynamic object is described by a set of equations for the translational motion of the center of mass along the optical axis and for variations of the angular attitude relative to the image plane. The efficiency of the proposed method is illustrated by an example of estimating the object's angular attitude.

    Intelligent Interfaces to Empower People with Disabilities

    Severe motion impairments can result from non-progressive disorders, such as cerebral palsy, or degenerative neurological diseases, such as Amyotrophic Lateral Sclerosis (ALS), Multiple Sclerosis (MS), or muscular dystrophy (MD). They can also be due to traumatic brain injuries, for example from a traffic accident, or to brainstem…

    A Study of Segmentation and Normalization for Iris Recognition Systems

    Iris recognition systems capture an image of an individual's eye. The iris in the image is then segmented and normalized for the feature extraction process. The performance of iris recognition systems depends heavily on segmentation and normalization: even an effective feature extraction method cannot obtain useful information from an iris image that is not segmented or normalized properly. This thesis aims to enhance the segmentation and normalization processes in iris recognition systems to increase overall accuracy. Previous iris segmentation approaches assume that the boundary of the pupil is a circle. However, according to our observation, a circle cannot model this boundary accurately. To improve the quality of segmentation, a novel active contour is proposed to detect the irregular boundary of the pupil. The method successfully detects all the pupil boundaries in the CASIA database and increases recognition accuracy. Most previous normalization approaches employ a polar coordinate system to transform the iris. Transforming the iris into polar coordinates requires a reference point as the polar origin. Since the pupil and limbus are generally non-concentric, there are two natural choices: the pupil center and the limbus center. However, their performance differences have not been investigated so far. We also propose a new reference point, the virtual center of a pupil with radius equal to zero, which we refer to as the linearly-guessed center. The experiments demonstrate that the linearly-guessed center provides much better recognition accuracy. In addition to evaluating the pupil and limbus centers and proposing a new reference point for normalization, we reformulate the normalization problem as a minimization problem. The advantage of this formulation is that it is not restricted by the circular assumption used in the reference point approaches. The experimental results demonstrate that the proposed method performs better than the reference point approaches. Furthermore, previous normalization approaches transform the iris texture into a fixed-size rectangular block; the shape and size of the normalized iris have not been investigated in detail. In this thesis, we study the size parameter of traditional approaches and propose a dynamic normalization scheme, which transforms an iris based on the radii of the pupil and limbus. The experimental results demonstrate that the dynamic normalization scheme performs better than the previous approaches.
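    The reference-point normalization the thesis builds on can be sketched as follows. This is a minimal rubber-sheet-style unwrapping under assumed circular pupil and limbus boundaries; the 64x256 output size and the synthetic test image are made up for illustration, and a real system would operate on a segmented eye image.

    ```python
    import numpy as np

    def normalize_iris(image, pupil_xy, pupil_r, limbus_xy, limbus_r,
                       out_h=64, out_w=256):
        """Unwrap the iris annulus into a fixed-size rectangular block.

        The polar origin here is the pupil center; using the limbus center
        (or another reference point) only changes pupil_xy.
        """
        h, w = image.shape
        out = np.zeros((out_h, out_w), dtype=image.dtype)
        thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
        for j, theta in enumerate(thetas):
            # Boundary points along this ray: inner (pupil) and outer (limbus).
            px = pupil_xy[0] + pupil_r * np.cos(theta)
            py = pupil_xy[1] + pupil_r * np.sin(theta)
            lx = limbus_xy[0] + limbus_r * np.cos(theta)
            ly = limbus_xy[1] + limbus_r * np.sin(theta)
            for i, t in enumerate(np.linspace(0, 1, out_h)):
                # Linear interpolation between the two boundaries.
                x = int(round((1 - t) * px + t * lx))
                y = int(round((1 - t) * py + t * ly))
                if 0 <= x < w and 0 <= y < h:
                    out[i, j] = image[y, x]
        return out

    # Tiny synthetic example: a radial-gradient "eye" image, concentric boundaries.
    yy, xx = np.mgrid[0:200, 0:200]
    img = np.hypot(xx - 100, yy - 100).astype(np.float32)
    block = normalize_iris(img, (100, 100), 20, (100, 100), 80)
    print(block.shape)  # (64, 256)
    ```

    On the radial-gradient image, each output row holds pixels at one radius, so the first row sits near the pupil radius and the last near the limbus radius, which makes the mapping easy to verify.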

    Improved facial feature fitting for model based coding and animation

    EThOS - Electronic Theses Online Service, United Kingdom

    Fitting and tracking of a scene model in very low bit rate video coding


    Intensity based methodologies for facial expression recognition.

    by Hok Chun Lo. Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 136-143). Abstracts in English and Chinese.
    Contents: List of Figures. List of Tables.
    1. Introduction.
    2. Previous Work on Facial Expression Recognition: active deformable contour; facial feature points and B-spline curve; optical flow approach; Facial Action Coding System; neural network.
    3. Eigen-Analysis Based Method for Facial Expression Recognition: related topics (terminologies; principal component analysis, its significance, and a graphical presentation of its idea); EigenFace method for face recognition; person-dependent database; direct adoption of the EigenFace method; multiple subspaces method; detailed description of our approaches (database formation: conversion of images to column vectors; preprocessing by scale regulation, orientation regulation, and cropping; calculation of expression subspaces for both methods); recognition process for the direct adoption method; recognition process for the multiple subspaces method (intensity normalization algorithm; matching); experimental results and analysis.
    4. Deformable Template Matching Scheme for Facial Expression Recognition: background knowledge (camera models: pinhole with perspective projection, orthographic, and affine; view synthesis and its technical issues); view synthesis technique for facial expression recognition, and from view synthesis to template deformation; database formation (person-dependent database; model image acquisition; templates' structure and formation process; selection of warping points and template anchor points); recognition process (solving the warping equation; template deformation; templates from input images; matching); implementation of an automation system (Kalman filter; using the Kalman filter for tracking in our system; limitations); experimental results and analysis.
    5. Conclusion and Future Work.
    Appendix: Image Samples 1-4. Bibliography.
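    The eigen-analysis approach outlined in Chapter 3 above can be sketched as a PCA subspace plus nearest-neighbor matching. This is a generic illustration, not the thesis code: the toy vectors, dimensions, and noise levels below are invented, and real inputs would be preprocessed (scaled, rotated, cropped) face images flattened into column vectors.

    ```python
    import numpy as np

    def build_subspace(train, k):
        """PCA on training images stored as columns: mean-center, keep top-k directions."""
        mean = train.mean(axis=1, keepdims=True)
        A = train - mean
        # Eigen-decomposition of the small Gram matrix A^T A (the classic eigenface trick
        # that avoids forming the large pixel-by-pixel covariance matrix).
        vals, vecs = np.linalg.eigh(A.T @ A)
        order = np.argsort(vals)[::-1][:k]
        U = A @ vecs[:, order]
        U /= np.linalg.norm(U, axis=0)       # orthonormal basis columns
        return mean, U

    def classify(x, mean, U, train_proj, labels):
        """Project a probe image into the subspace; return the nearest neighbor's label."""
        p = U.T @ (x - mean)
        d = np.linalg.norm(train_proj - p, axis=0)
        return labels[int(np.argmin(d))]

    # Toy data: ten 100-pixel "images", five per expression class, built from two bases.
    rng = np.random.default_rng(1)
    base0, base1 = rng.normal(size=(100, 1)), rng.normal(size=(100, 1))
    train = np.hstack([base0 + 0.1 * rng.normal(size=(100, 5)),
                       base1 + 0.1 * rng.normal(size=(100, 5))])
    labels = [0] * 5 + [1] * 5
    mean, U = build_subspace(train, k=4)
    train_proj = U.T @ (train - mean)
    probe = base1 + 0.1 * rng.normal(size=(100, 1))
    print(classify(probe, mean, U, train_proj, labels))  # expected: 1
    ```

    The "multiple subspaces" variant in the outline would build one such subspace per expression class and match the probe against each, rather than pooling all classes into a single subspace as done here.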