4 research outputs found

    A cognitive ego-vision system for interactive assistance

    With increasing computational power and decreasing size, computers are nowadays wearable and mobile. They are becoming companions in people's everyday lives. Personal digital assistants and mobile phones equipped with adequate software attract a lot of public interest, although the functionality they provide in terms of assistance is little more than a mobile database for appointments, addresses, to-do lists, and photos. Compared to the assistance a human can provide, such systems can hardly be called real assistants. The motivation to construct more human-like assistance systems that develop a certain level of cognitive capability leads to the exploration of two central paradigms in this work. The first paradigm is termed cognitive vision systems. Such systems take human cognition as a design principle for their underlying concepts and develop learning and adaptation capabilities in order to be more flexible in their application. They are embodied, active, and situated. Second, the ego-vision paradigm is introduced as a very tight interaction scheme between a user and a computer system that especially eases close collaboration and assistance between the two. Ego-vision systems (EVS) take the user's (visual) perspective and integrate the human into the system's processing loop by means of shared perception and augmented reality. EVSs adopt techniques of cognitive vision to identify objects, interpret actions, and understand the user's visual perception, and they articulate their knowledge and interpretations by augmenting the user's own view. These two paradigms are studied as rather general concepts, but always with the goal of realizing more flexible assistance systems that closely collaborate with their users.

    This work provides three major contributions. First, a definition and explanation of ego-vision as a novel paradigm is given, and the benefits and challenges of this paradigm are discussed. Second, a configuration of different approaches that permit an ego-vision system to perceive its environment and its user is presented, in terms of object and action recognition, head gesture recognition, and mosaicing. These account for the specific challenges identified for ego-vision systems, whose perception capabilities are based on wearable sensors only. Finally, a visual active memory (VAM) is introduced as a flexible conceptual architecture for cognitive vision systems in general, and for assistance systems in particular. It adopts principles of human cognition to develop a representation for the information stored in this memory. So-called memory processes continuously analyze, modify, and extend the content of the VAM, and the functionality of the integrated system emerges from their coordinated interplay. An integrated assistance system applying the approaches and concepts outlined above is implemented on the basis of the visual active memory. The system architecture is discussed, and some exemplary processing paths through the system are presented. The system assists users in object manipulation tasks and has reached a maturity level that allows user studies to be conducted. Quantitative results for the different integrated memory processes are presented, as well as an assessment of the interactive system by means of these user studies.
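    The memory-process idea in this abstract — a shared store of hypotheses that independent processes continuously analyze, modify, and extend — can be sketched in a few lines. All names here (`MemoryElement`, `forgetting_process`, the `reliability` field) are illustrative assumptions, not the thesis's actual architecture:

```python
from dataclasses import dataclass, field
import itertools
import time

@dataclass
class MemoryElement:
    kind: str               # e.g. "object-hypothesis", "action-hypothesis"
    data: dict
    reliability: float = 1.0
    timestamp: float = field(default_factory=time.time)

class VisualActiveMemory:
    """Minimal sketch: a shared store that memory processes read and rewrite."""
    def __init__(self):
        self._ids = itertools.count()
        self.elements = {}

    def insert(self, elem):
        eid = next(self._ids)
        self.elements[eid] = elem
        return eid

    def query(self, kind):
        return [(i, e) for i, e in self.elements.items() if e.kind == kind]

def forgetting_process(memory, min_reliability=0.2):
    """One example memory process: decay reliability, drop weak hypotheses."""
    for eid in list(memory.elements):
        e = memory.elements[eid]
        e.reliability *= 0.9
        if e.reliability < min_reliability:
            del memory.elements[eid]

vam = VisualActiveMemory()
vam.insert(MemoryElement("object-hypothesis", {"label": "cup"}, reliability=0.9))
vam.insert(MemoryElement("object-hypothesis", {"label": "noise"}, reliability=0.1))
forgetting_process(vam)
print(len(vam.query("object-hypothesis")))  # 1: the weak hypothesis was dropped
```

    In the thesis, system functionality emerges from the coordinated interplay of many such processes (recognizers, consolidators, forgetting) running over one shared memory; the loop above shows only the simplest case of one process acting on the store.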

    An automatic system for classification of breast cancer lesions in ultrasound images

    Breast cancer is the most common of all cancers and the second most deadly cancer in women in developed countries. Mammography and ultrasound imaging are the standard techniques used in cancer screening. Mammography is widely used as the primary screening tool; however, it is an invasive technique because of the radiation involved. Ultrasound appears to be good at picking up many cancers missed by mammography. In addition, ultrasound is non-invasive, as no radiation is used, as well as portable and versatile. However, ultrasound images usually have poor quality because of multiplicative speckle noise that results in artifacts. Because of this noise, segmentation of suspected areas in ultrasound images is a challenging task that remains an open problem despite many years of research. In this research, a new method for automatic detection of suspected breast cancer lesions in ultrasound images is proposed. In this fully automated method, new de-noising and segmentation techniques are introduced, and a high-accuracy classifier using a combination of morphological and textural features is employed. We use a combination of fuzzy logic and compounding to de-noise ultrasound images and reduce shadows. We introduce a new method to identify seed points and then use a region-growing method to perform segmentation. For preliminary classification we use three classifiers (ANN, AdaBoost, FSVM) and then apply majority voting to obtain the final result. We demonstrate that our automated system performs better than other state-of-the-art systems: on our database of ultrasound images from 80 patients we reached an accuracy of 98.75%, versus 88.75% for the ABUS method and 92.50% for the Hybrid Filtering method. Future work will involve a larger dataset of ultrasound images, and we will extend our system to handle colour ultrasound images. We will also study the impact of a larger number of texture and morphological features, as well as of the weighting scheme, on the performance of our classifier. We will also develop an automated method to identify the "wall thickness" of a mass in breast ultrasound images; presently the wall thickness is extracted manually with the help of a physician.
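    The seed-plus-region-growing segmentation step described above can be illustrated with a minimal sketch. This is a generic intensity-based region grower on a toy array, not the authors' actual algorithm; the running-mean criterion, the tolerance value, and the 4-connectivity are assumptions made here for illustration:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel, adding 4-connected neighbours whose
    intensity lies within `tol` of the region's running mean intensity."""
    h, w = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                mean = total / len(region)
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region

# Toy "ultrasound" patch: a dark (hypoechoic) block next to bright tissue.
image = [
    [10, 11, 50, 52],
    [10, 12, 51, 53],
    [11, 10, 49, 50],
]
lesion = region_grow(image, seed=(0, 0), tol=5)
print(sorted(lesion))  # the six low-intensity pixels on the left
```

    In the actual pipeline, the seed points come from the authors' detection step and growing operates on the de-noised image; here the seed and tolerance are simply given.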

    Feature learning for recognition with Bayesian networks

    Many realistic visual recognition tasks are "open" in the sense that the number and nature of the categories to be learned are not initially known, and there is no closed set of training images available to the system. We argue that open recognition tasks require incremental learning methods, and feature sets that are capable of expressing distinctions at any level of specificity or generality. We describe progress toward such a system that is based on an infinite combinatorial feature space. Feature primitives can be composed into increasingly complex and specific compound features. Distinctive features are learned incrementally and are incorporated into dynamically updated Bayesian network classifiers. Experimental results illustrate the applicability and potential of our approach. Proc. Fifteenth International Conference on Pattern Recognition (ICPR 2000), 3-8 September 2000, Barcelona, Spain. Copyright © IEEE.
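    The idea of composing primitives into compound features and updating a classifier one example at a time can be illustrated with a simplified sketch. The paper uses Bayesian network classifiers over a combinatorial feature space; here a plain naive Bayes with conjunction-of-primitives features stands in, and all names and the Laplace smoothing are illustrative assumptions, not the paper's method:

```python
from collections import defaultdict
from math import log

class IncrementalNaiveBayes:
    """Sketch of incremental learning over composable binary features.
    A "compound" feature is a frozenset of primitives that must all be
    present in an example; counts are updated one example at a time."""
    def __init__(self, features):
        self.features = [frozenset(f) for f in features]
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(lambda: defaultdict(int))

    def _active(self, primitives):
        # A feature fires when all of its primitives are observed.
        return [f for f in self.features if f <= primitives]

    def update(self, primitives, label):
        self.class_counts[label] += 1
        for f in self._active(set(primitives)):
            self.feat_counts[label][f] += 1

    def classify(self, primitives):
        active = set(self._active(set(primitives)))
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label, n in self.class_counts.items():
            lp = log(n / total)
            for f in self.features:
                p = (self.feat_counts[label][f] + 1) / (n + 2)  # Laplace smoothing
                lp += log(p if f in active else 1 - p)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Primitives "edge" and "corner" plus one compound feature built from both.
nb = IncrementalNaiveBayes([{"edge"}, {"corner"}, {"edge", "corner"}])
nb.update({"edge", "corner"}, "chair")
nb.update({"edge"}, "table")
print(nb.classify({"edge", "corner"}))  # "chair"
```

    In the paper's setting, new compound features and new categories can be appended at any time, which is what makes the feature space "open"; the sketch fixes the feature list up front only to stay short.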