7 research outputs found
Kalman Filtering for a Class of Dynamic Object Images
We discuss the problem of estimating the state of a dynamic object from observed images generated by an optical system. The work aims to implement a novel approach that ensures improved accuracy of dynamic object tracking using a sequence of images. We utilize a vector model that describes the object image as a limited number of vertices (reference points). During imaging, the object of interest is assumed to be retained at the center of each frame, so that the motion parameters can be treated as projections onto the axes of a coordinate system aligned with the camera's optical axis. The novelty of the approach is that the observed parameters of the object (the distance along the optical axis and the angular attitude) are calculated from the coordinates of specified points in the object images. To estimate the object state, a Kalman-Bucy filter is constructed on the assumption that the dynamic object's motion is described by a set of equations for the translational motion of the center of mass along the optical axis and for variations in the angular attitude relative to the image plane. The efficiency of the proposed method is illustrated by an example of estimating the object's angular attitude.
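The estimation loop described above can be sketched with a discrete-time Kalman filter. The paper builds a continuous-time Kalman-Bucy filter; the discrete analogue below, and all matrices, noise levels, and the frame rate, are illustrative assumptions rather than the authors' model.

```python
import numpy as np

# Discrete Kalman filter tracking an object's angular attitude from
# per-frame angle measurements (derived from reference-point coordinates).
dt = 0.04                                # frame interval, 25 fps (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-angular-rate motion model
H = np.array([[1.0, 0.0]])               # only the angle is observed
Q = np.diag([1e-5, 1e-3])                # process noise covariance (assumed)
R = np.array([[1e-2]])                   # measurement noise covariance (assumed)

x = np.zeros((2, 1))                     # state: [angle, angular rate]
P = np.eye(2)                            # state covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a scalar angle measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed the filter noisy per-frame angle measurements of a rotating object.
rng = np.random.default_rng(0)
true_angle = 0.0
for k in range(100):
    true_angle += 0.5 * dt               # object rotates at 0.5 rad/s
    z = true_angle + rng.normal(0, 0.1)  # noisy measurement
    x, P = kalman_step(x, P, z)

print(abs(x[0, 0] - true_angle))         # residual estimation error
```

With the motion model matched to the true constant-rate rotation, the filtered angle estimate settles well below the per-frame measurement noise.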
Intelligent Interfaces to Empower People with Disabilities
Severe motion impairments can result from non-progressive disorders, such as cerebral palsy, or from degenerative neurological diseases, such as Amyotrophic Lateral Sclerosis (ALS), Multiple Sclerosis (MS), or muscular dystrophy (MD). They can also be due to traumatic brain injuries, for example from a traffic accident, or to brainste…
A Study of Segmentation and Normalization for Iris Recognition Systems
Iris recognition systems capture an image of an individual's eye. The iris in the image is then segmented and normalized for the feature extraction process. The performance of iris recognition systems depends heavily on segmentation and normalization: even an effective feature extraction method cannot obtain useful information from an iris image that is not segmented or normalized properly. This thesis aims to enhance the performance of the segmentation and normalization processes in iris recognition systems to increase overall accuracy. Previous iris segmentation approaches assume that the pupil boundary is a circle; however, according to our observations, a circle cannot model this boundary accurately. To improve segmentation quality, a novel active contour is proposed to detect the irregular boundary of the pupil. The method successfully detects all the pupil boundaries in the CASIA database and increases recognition accuracy. Most previous normalization approaches employ a polar coordinate system to transform the iris. Transforming the iris into polar coordinates requires a reference point as the polar origin. Since the pupil and limbus are generally non-concentric, there are two natural choices, the pupil center and the limbus center; however, their performance differences have not been investigated so far. We also propose a new reference point, the virtual center of a pupil with radius equal to zero, which we refer to as the linearly-guessed center. The experiments demonstrate that the linearly-guessed center provides much better recognition accuracy. In addition to evaluating the pupil and limbus centers and proposing a new reference point for normalization, we reformulate the normalization problem as a minimization problem. The advantage of this formulation is that it is not restricted by the circular assumption used in the reference point approaches. The experimental results demonstrate that the proposed method performs better than the reference point approaches. Furthermore, previous normalization approaches transform the iris texture into a fixed-size rectangular block, yet the shape and size of the normalized iris have not been investigated in detail. In this thesis, we study the size parameter of traditional approaches and propose a dynamic normalization scheme, which transforms an iris based on the radii of the pupil and limbus. The experimental results demonstrate that the dynamic normalization scheme performs better than the previous approaches.
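The polar normalization step the thesis analyzes can be sketched as the classic rubber-sheet unwrapping: sample the annulus between the pupil and limbus boundaries into a fixed-size rectangular block. The function name, circle parameters, and block size below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def normalize_iris(img, pupil_c, pupil_r, limbus_c, limbus_r,
                   n_radial=64, n_angular=256):
    """Unwrap the iris annulus into an (n_radial, n_angular) block.

    pupil_c / limbus_c are (x, y) centers. The two circles are generally
    non-concentric, which is exactly why the choice of polar origin
    (pupil center, limbus center, or a virtual center) matters.
    """
    h, w = img.shape
    out = np.zeros((n_radial, n_angular), dtype=img.dtype)
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    for j, t in enumerate(thetas):
        # Boundary points on the pupil and limbus circles at angle t.
        xp = pupil_c[0] + pupil_r * np.cos(t)
        yp = pupil_c[1] + pupil_r * np.sin(t)
        xl = limbus_c[0] + limbus_r * np.cos(t)
        yl = limbus_c[1] + limbus_r * np.sin(t)
        for i, r in enumerate(np.linspace(0, 1, n_radial)):
            # Sample along the ray between the two boundaries.
            x = int(round((1 - r) * xp + r * xl))
            y = int(round((1 - r) * yp + r * yl))
            if 0 <= x < w and 0 <= y < h:
                out[i, j] = img[y, x]
    return out

# Toy usage on a synthetic eye image with slightly non-concentric circles:
img = np.random.default_rng(1).integers(0, 256, (240, 320), dtype=np.uint8)
block = normalize_iris(img, (160, 120), 30, (165, 118), 90)
print(block.shape)                       # (64, 256)
```

A fixed-size output block is what the thesis's dynamic normalization scheme relaxes, by letting the sampling depend on the measured pupil and limbus radii.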
Improved facial feature fitting for model based coding and animation
EThOS - Electronic Theses Online Service, United Kingdom
Intensity based methodologies for facial expression recognition.
by Hok Chun Lo. Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 136-143). Abstracts in English and Chinese.

Contents:
1. Introduction
2. Previous Work on Facial Expression Recognition
   2.1 Active Deformable Contour
   2.2 Facial Feature Points and B-spline Curve
   2.3 Optical Flow Approach
   2.4 Facial Action Coding System
   2.5 Neural Network
3. Eigen-Analysis Based Method for Facial Expression Recognition
   3.1 Related Topics on Eigen-Analysis Based Methods: terminologies; principal component analysis, its significance, and a graphical presentation of its idea
   3.2 EigenFace Method for Face Recognition
   3.3 Eigen-Analysis Based Method for Facial Expression Recognition: person-dependent database; direct adoption of the EigenFace method; multiple subspaces method
   3.4 Detailed Description of Our Approaches: database formation (conversion of images to column vectors; preprocessing by scale regulation, orientation regulation, and cropping; calculation of expression subspaces for the direct adoption and multiple subspaces methods); recognition process for the direct adoption method; recognition process for the multiple subspaces method (intensity normalization algorithm; matching)
   3.5 Experimental Results and Analysis
4. Deformable Template Matching Scheme for Facial Expression Recognition
   4.1 Background Knowledge: camera models (pinhole camera model and perspective projection; orthographic camera model; affine camera model); view synthesis and its technical issues
   4.2 View Synthesis Technique for Facial Expression Recognition: from view synthesis to template deformation
   4.3 Database Formation: person-dependent database; model image acquisition; template structure and formation process; selection of warping points and template anchor points
   4.4 Recognition Process: solving the warping equation; template deformation; templates from input images; matching
   4.5 Implementation of an Automated System: Kalman filter; using a Kalman filter for tracking in our system; limitations
   4.6 Experimental Results and Analysis
5. Conclusion and Future Work
Appendix: Image Samples 1-4
Bibliography
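The eigen-analysis approach outlined in Chapter 3 can be sketched as EigenFace-style recognition: project vectorized face images onto a PCA subspace and classify by nearest neighbour in that subspace. The tiny synthetic "database", image size, and all names below are illustrative assumptions, not the thesis's data or code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fake person-dependent database: 3 expression classes, 5 images each,
# flattened to vectors (here 32x32 = 1024 pixels per image).
classes, per_class, dim = 3, 5, 32 * 32
prototypes = rng.normal(0, 1, (classes, dim))
X = np.vstack([p + 0.1 * rng.normal(0, 1, (per_class, dim))
               for p in prototypes])            # (15, 1024) training matrix
labels = np.repeat(np.arange(classes), per_class)

# Principal component analysis via SVD of the mean-centered data.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 5                                           # retained components
basis = Vt[:k]                                  # (k, dim) eigen-images

def project(v):
    """Coordinates of an image vector in the expression subspace."""
    return basis @ (v - mean)

train_proj = np.array([project(x) for x in X])

def recognize(v):
    """Nearest-neighbour match in the eigen subspace."""
    d = np.linalg.norm(train_proj - project(v), axis=1)
    return labels[np.argmin(d)]

# A noisy new image of class 1 should be recognized as class 1.
probe = prototypes[1] + 0.1 * rng.normal(0, 1, dim)
print(recognize(probe))
```

The multiple subspaces variant in the thesis fits one such subspace per expression instead of a single shared subspace; the projection and matching steps stay the same.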