
    Image Segmentation with Eigenfunctions of an Anisotropic Diffusion Operator

    We propose the eigenvalue problem of an anisotropic diffusion operator for image segmentation. The diffusion matrix is defined based on the input image. The eigenfunctions and the projection of the input image onto some eigenspace capture key features of the input image. An important property of the model is that for many input images, the first few eigenfunctions are close to being piecewise constant, which makes them useful as a basis for a variety of applications such as image segmentation and edge detection. The eigenvalue problem is shown to be related to the algebraic eigenvalue problems arising from several commonly used discrete spectral clustering models. The relation provides a better understanding of, and helps develop more efficient numerical implementations and rigorous numerical analysis for, discrete spectral segmentation methods. The new continuous model also differs from energy-minimization methods such as geodesic active contours in that no initial guess is required in the current model. The multi-scale feature is a natural consequence of the anisotropic diffusion operator, so there is no need to solve the eigenvalue problem at multiple levels. A numerical implementation based on a finite element method with an anisotropic mesh adaptation strategy is presented. The numerical scheme is shown to give much more accurate results on eigenfunctions than uniform meshes. Several interesting features of the model are examined in numerical examples and possible applications are discussed.
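
    As a rough illustration of the discrete spectral clustering models the abstract relates the continuous operator to (not the paper's anisotropic diffusion model itself), the sketch below builds a pixel-affinity graph for a small synthetic image, forms the normalized graph Laplacian, and thresholds its second eigenvector, which for such images is close to piecewise constant. The Gaussian affinity scale, test image, and median threshold are illustrative choices.

    import numpy as np
    from scipy.sparse import lil_matrix, csgraph

    def spectral_segment(img, sigma=0.1):
        # Pixel-affinity graph on a 4-connected grid with Gaussian intensity weights
        h, w = img.shape
        n = h * w
        W = lil_matrix((n, n))
        for y in range(h):
            for x in range(w):
                i = y * w + x
                for dy, dx in ((0, 1), (1, 0)):
                    yy, xx = y + dy, x + dx
                    if yy < h and xx < w:
                        j = yy * w + xx
                        a = np.exp(-(img[y, x] - img[yy, xx]) ** 2 / (2 * sigma ** 2))
                        W[i, j] = W[j, i] = a
        # Normalized graph Laplacian; its low eigenvectors are near piecewise constant
        L = csgraph.laplacian(W.tocsr(), normed=True).toarray()
        _, vecs = np.linalg.eigh(L)               # eigenvalues in ascending order
        fiedler = vecs[:, 1].reshape(h, w)        # second ("Fiedler") eigenvector
        return fiedler > np.median(fiedler)       # crude two-region segmentation

    rng = np.random.default_rng(0)
    img = np.zeros((32, 32))
    img[:, 16:] = 1.0                             # two flat regions
    img += 0.05 * rng.standard_normal(img.shape)  # mild noise
    mask = spectral_segment(img)
    print(mask.sum(), "pixels assigned to one region")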

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investments of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, user-machine interaction, and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field. Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important for understanding human social interactions and for developing a robot that can smoothly communicate with human users in the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER. Comment: submitted to Advanced Robotics
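
    As a loose, generic stand-in for the unsupervised multimodal categorization mentioned above (the surveyed SER work uses richer probabilistic models than this), the sketch below clusters concatenated visual, haptic, and auditory feature vectors with a Gaussian mixture so that object categories emerge without labels. All feature dimensions and data here are synthetic placeholders.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    n_objects, n_categories = 90, 3
    # Synthetic per-object features: vision (8-d) + haptic (4-d) + audio (6-d)
    prototypes = rng.normal(size=(n_categories, 8 + 4 + 6))
    true_cat = rng.integers(0, n_categories, size=n_objects)
    X = prototypes[true_cat] + 0.3 * rng.normal(size=(n_objects, prototypes.shape[1]))

    # Unsupervised categorization: no labels are used, categories emerge from the data
    gmm = GaussianMixture(n_components=n_categories, random_state=0).fit(X)
    categories = gmm.predict(X)
    print("objects per emergent category:", np.bincount(categories))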

    An interactive tool for semi-automated leaf annotation

    High-throughput plant phenotyping is emerging as a necessary step towards meeting the agricultural demands of the future. Central to its success is the development of robust computer vision algorithms that analyze images and extract phenotyping information to be associated with genotypes and environmental conditions, for identifying traits suitable for further development. Obtaining quantitative data at the leaf level is important for better understanding this interaction. While certain efforts have been made to obtain such information in an automated fashion, further innovation is necessary. In this paper we present an annotation tool that can be used to semi-automatically segment leaves in images of rosette plants. The tool, designed to run both stand-alone and in cloud-based environments, can be used to annotate data directly for the study of plant and leaf growth or to provide annotated datasets for learning-based approaches to extracting phenotypes from images. It relies on an interactive graph-based segmentation algorithm that propagates expert-provided priors (in the form of pixels) to the rest of the image, using the random walk formulation to find a good per-leaf segmentation. To evaluate the tool we use standardized datasets available from the LSC and LCC 2015 challenges, achieving an average leaf segmentation accuracy of almost 97% using scribbles as annotations. The tool and source code are publicly available at http://www.phenotiki.com and as a GitHub repository at https://github.com/phenotiki/LeafAnnotationTool.
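
    To make the scribble-propagation step concrete, here is a minimal sketch using scikit-image's generic random_walker function on a synthetic image with two seed scribbles; it is not the Phenotiki annotation tool itself, and the image, seed positions, and beta value are placeholder choices for illustration.

    import numpy as np
    from skimage.segmentation import random_walker

    rng = np.random.default_rng(0)
    img = 0.2 + 0.05 * rng.standard_normal((64, 64))
    img[16:48, 16:48] += 0.6              # a bright "leaf" blob on a darker background

    # Scribble labels: 0 = unlabeled, 1 = background seed, 2 = leaf seed
    seeds = np.zeros(img.shape, dtype=np.uint8)
    seeds[2, 2] = 1                       # background scribble
    seeds[32, 32] = 2                     # leaf scribble

    # Propagate the scribbles to every pixel via the random-walk formulation
    labels = random_walker(img, seeds, beta=130, mode='bf')
    print("pixels assigned to the leaf:", int((labels == 2).sum()))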