5 research outputs found

    The research for shape-based visual recognition of object categories

    Abstract: Visual object category recognition aims to identify instances of a particular object class in an image, and shape-based object category recognition is currently one of the hot topics in computer vision research. The diversity of object poses in real images and the complexity of the environment pose great challenges to shape extraction and recognition. Drawing on findings about biological vision mechanisms, this thesis studies shape-based object category recognition algorithms. The main research contents are as follows: 1. The visual mechanisms related to shape cognition are studied, and the physiological basis of the holistic nature of shape perception and its physiological models are analysed. Building on this holism, a framework for a shape-based object category recognition system is established; the framework emphasises both the role of holism in bottom-up feature processing and the role of holistic constraints in top-down recognition. 2. Inspired by the integration-field model in biological vision, a three-stage contour detection algorithm is proposed. Stage 1 uses a structure-adaptive filter to smooth... Categorical object detection addresses determining the number of instances of a particular object category in an image and localizing those instances in space and scale. Shape-based visual recognition of object categories is one of the hot topics in computer vision. The diversity of target poses and the complexity of the environment in real images pose huge challenges to shape extraction and obj... Degree: Doctor of Engineering. Department and major: School of Information Science and Technology, Department of Automation, Control Theory and Control Engineering. Student ID: 2322006015337
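    The abstract describes the three-stage contour detector only up to its first stage, so the following is a rough, illustrative Python sketch of the general "smooth, compute local edge energy, suppress the textured surround" pattern that integration-field-inspired detectors follow. The function name detect_contours, the fixed Gaussian smoothing (standing in for the thesis's structure-adaptive filter), and all parameter values are assumptions for illustration, not the thesis algorithm.

```python
# Illustrative sketch (not the thesis algorithm): a three-stage contour
# detector in the spirit of "smooth -> local edge response -> surround
# suppression", built from plain NumPy/SciPy operations.
import numpy as np
from scipy import ndimage

def detect_contours(image, sigma=1.5, surround_sigma=4.0, alpha=1.0):
    """Return a contour-strength map for a 2-D grayscale image.

    Stage 1: smoothing (a fixed Gaussian here; the thesis uses a
             structure-adaptive filter).
    Stage 2: local edge energy from image gradients.
    Stage 3: surround suppression: responses surrounded by many similar
             responses (texture) are attenuated, favouring object contours.
    """
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)   # stage 1
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    energy = np.hypot(gx, gy)                                        # stage 2

    # Annular surround estimate: wide Gaussian minus narrow Gaussian (DoG).
    wide = ndimage.gaussian_filter(energy, surround_sigma)
    narrow = ndimage.gaussian_filter(energy, sigma)
    surround = np.clip(wide - narrow, 0.0, None)

    contours = np.clip(energy - alpha * surround, 0.0, None)         # stage 3
    return contours / (contours.max() + 1e-8)

if __name__ == "__main__":
    img = np.random.rand(128, 128)   # stand-in for a real grayscale image
    cmap = detect_contours(img)
    print(cmap.shape, float(cmap.max()))
```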

    Efficiently Combining Contour and Texture Cues for Object Recognition

    This paper proposes an efficient fusion of contour and texture cues for image categorization and object detection. Our work confirms and strengthens recent results that combining complementary feature types improves performance. We obtain a similar improvement in accuracy and additionally an improvement in efficiency. We use a boosting algorithm to learn models that use contour and texture features. Our main contributions are (i) the use of dense generic texture features to complement contour fragments, and (ii) a simple feature selection mechanism that includes the computational costs of features in order to learn a run-time efficient model. Our evaluation on 17 challenging and varied object classes confirms that the synergy of the two feature types performs significantly better than either alone, and that computational efficiency is substantially improved using our feature selection mechanism. An investigation of the boosted features shows a fascinating emergent property: the absence of certain textures often contributes towards object detection. Comparison with recent work shows that performance is state of the art.
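    As a rough illustration of the paper's idea of folding per-feature computational cost into feature selection, here is a hedged Python sketch of a greedy, budgeted selection step. The helper select_features, the linear gain-minus-cost utility, and the budget parameter are illustrative assumptions, not the boosting criterion actually used in the paper.

```python
# Minimal sketch of run-time-aware feature selection: greedily pick features
# with high training gain but low evaluation cost, under a total cost budget.
import numpy as np

def select_features(candidate_scores, candidate_costs, budget, lam=0.1):
    """candidate_scores: per-feature training gains (e.g. weighted error reduction).
    candidate_costs:  per-feature evaluation costs (e.g. milliseconds).
    budget:           total evaluation cost we are willing to spend at test time.
    lam:              trade-off between gain and cost.
    """
    chosen, spent = [], 0.0
    remaining = set(range(len(candidate_scores)))
    while remaining:
        # Only consider features that still fit in the remaining budget.
        affordable = [i for i in remaining
                      if spent + candidate_costs[i] <= budget]
        if not affordable:
            break
        # Utility = gain penalised by cost; the paper's exact criterion differs.
        best = max(affordable,
                   key=lambda i: candidate_scores[i] - lam * candidate_costs[i])
        chosen.append(best)
        spent += candidate_costs[best]
        remaining.remove(best)
    return chosen, spent

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gains = rng.random(20)         # stand-in gains for 20 contour/texture features
    costs = rng.random(20) * 5.0   # stand-in per-feature costs
    picked, used = select_features(gains, costs, budget=10.0)
    print(picked, round(used, 2))
```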

    Characterizing Objects in Images using Human Context

    Humans have an unmatched capability of interpreting detailed information about existent objects by just looking at an image. In particular, they can effortlessly perform the following tasks: 1) localizing various objects in the image and 2) assigning functionalities to the parts of localized objects. This dissertation addresses the problem of helping vision systems accomplish these two goals. The first part of the dissertation concerns object detection in a Hough-based framework. To this end, the independence assumption between features is addressed by grouping them in a local neighborhood. We study the complementary nature of individual and grouped features and combine them to achieve improved performance. Further, we consider the challenging case of detecting small and medium-sized household objects under human-object interactions. We first evaluate appearance-based star and tree models. While the tree model is slightly better, appearance-based methods continue to suffer from deficiencies caused by human interactions. To address this, we successfully incorporate automatically extracted human pose as a form of context for object detection. The second part of the dissertation addresses the tedious process of manually annotating objects to train fully supervised detectors. We observe that videos of human-object interactions with activity labels can serve as weakly annotated examples of household objects. Since such objects cannot be localized through appearance or motion alone, we propose a framework that includes human-centric functionality to retrieve the common object. Designed to maximize data utility by detecting multiple instances of an object per video, the framework achieves performance comparable to its fully supervised counterpart. The final part of the dissertation concerns localizing functional regions, or affordances, within objects by casting the problem as semantic image segmentation. For this purpose, we introduce a dataset involving human-object interactions with strong (pixel-level) and weak (click-point and image-level) affordance annotations. We propose a framework that utilizes both forms of weak labels and demonstrate that the effort spent on weak annotation can be further optimized using human context.
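    To make the Hough-based detection framework mentioned in the first part more concrete, below is a minimal Python sketch of centre voting with learned offsets (an implicit-shape-model-style accumulator). The helper hough_votes, the vote weighting, and the smoothing step are illustrative assumptions and omit the feature grouping and human-pose context that the dissertation actually contributes.

```python
# Minimal sketch of Hough-style voting for object centres: each local feature
# casts votes at positions offset from itself, as learned from training data.
import numpy as np
from scipy import ndimage

def hough_votes(feature_positions, offsets, image_shape, sigma=2.0):
    """feature_positions: (N, 2) array of (row, col) feature locations.
    offsets: list of (M_i, 2) arrays of centre offsets, one array per feature.
    Returns a smoothed vote accumulator over the image grid.
    """
    acc = np.zeros(image_shape, dtype=float)
    for pos, offs in zip(feature_positions, offsets):
        for off in offs:
            r, c = np.round(pos + off).astype(int)
            if 0 <= r < image_shape[0] and 0 <= c < image_shape[1]:
                acc[r, c] += 1.0 / len(offs)   # split each feature's vote mass
    return ndimage.gaussian_filter(acc, sigma)  # smooth before peak picking

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feats = rng.integers(0, 100, size=(50, 2)).astype(float)   # stand-in features
    offs = [rng.normal(0, 3, size=(5, 2)) for _ in range(50)]  # stand-in offsets
    votes = hough_votes(feats, offs, (100, 100))
    peak = np.unravel_index(np.argmax(votes), votes.shape)
    print("strongest centre hypothesis at", peak)
```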

    Human pose and action recognition

    This thesis focuses on the detection of persons and pose recognition using neural networks. The goal is to detect human body poses in a visual scene with multiple persons and to use this information in order to recognize human activity. This is achieved by first detecting persons in a scene and then estimating their body joints in order to infer articulated poses. The work developed in this thesis explored neural networks and deep learning methods. Deep learning allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have greatly improved the state of the art in many domains such as speech recognition and visual object detection and classification. Deep learning discovers intricate structure in data by using the backpropagation algorithm to indicate how a machine should change its internal parameters, which are used to compute the representation in each layer from the representation provided by the previous one. Person detection, in general, is a difficult task due to a large variability of representation caused by different factors such as scale, view and occlusion. An object detection framework based on multi-stage convolutional features for pedestrian detection is proposed in this thesis. This framework extends the Fast R-CNN framework by combining several convolutional features from different stages of a CNN (Convolutional Neural Network) to improve the detector's accuracy. This provides high-quality detections of persons in a visual scene, which are then used as input, in conjunction with a human pose estimation model, to estimate the body joint locations of multiple persons in an image. Human pose estimation is done by a deep convolutional neural network composed of a series of residual auto-encoders. These produce multiple predictions which are later combined to provide a heatmap prediction of human body joints. In this network topology, features are processed across all scales, capturing the various spatial relationships associated with the body. Repeated bottom-up and top-down processing with intermediate supervision for each auto-encoder network is applied. This results in very accurate 2D heatmaps of body joint predictions. The methods presented in this thesis were benchmarked against other top-performing methods on popular pedestrian detection and human pose estimation datasets, achieving good results compared with other state-of-the-art algorithms.
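    The network outputs described above are per-joint heatmaps, so here is a small, hedged Python sketch of the final decoding step: taking the argmax of each heatmap with a confidence threshold. The helper joints_from_heatmaps, the threshold value, and the (J, H, W) layout are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch: convert per-joint heatmaps (one channel per body joint)
# into 2-D joint coordinates with a confidence score.
import numpy as np

def joints_from_heatmaps(heatmaps, threshold=0.1):
    """heatmaps: (J, H, W) array, one channel per body joint.

    Returns a (J, 3) array of (row, col, confidence); joints whose peak
    confidence falls below `threshold` get invalid (-1, -1) coordinates.
    """
    J, H, W = heatmaps.shape
    out = np.zeros((J, 3))
    for j in range(J):
        idx = np.argmax(heatmaps[j])           # peak of the j-th heatmap
        r, c = np.unravel_index(idx, (H, W))
        conf = heatmaps[j, r, c]
        out[j] = (r, c, conf) if conf >= threshold else (-1, -1, conf)
    return out

if __name__ == "__main__":
    fake = np.random.rand(16, 64, 64)   # stand-in for a 16-joint network output
    print(joints_from_heatmaps(fake)[:3])
```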