
    Direct identification of breast cancer pathologies using blind separation of label-free localized reflectance measurements

    Breast tumors are blindly identified using Principal Component Analysis (PCA) and Independent Component Analysis (ICA) of localized reflectance measurements. No particular theoretical model of the reflectance needs to be assumed, yet the resulting features are shown to discriminate between breast pathologies. Normal, benign and malignant breast tissue types in lumpectomy specimens were imaged ex vivo, and a surgeon-guided calibration of the system is proposed to overcome the limitations of the blind analysis. A simple, fast, linear classifier that requires no training information is proposed for the diagnosis. A set of 29 breast tissue specimens was diagnosed with a sensitivity of 96% and a specificity of 95% when discriminating benign from malignant pathologies. The proposed hybrid PCA-ICA combination enhanced diagnostic discrimination, providing tumor probability maps, while the intermediate PCA parameters reflected tissue optical properties. This work has been supported by the Spanish Government through CYCIT projects DA2TOI (FIS2010-19860), TFS (TEC2010-20224-C02-02) and Alma Eguizabal's PhD grant (FPU12/04130), and by Dartmouth College.
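    A minimal sketch of the PCA-ICA pipeline described above, using scikit-learn. The synthetic input array, the number of components, and the normalisation into a "tumor probability" are illustrative assumptions, not the authors' exact procedure.

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        # Stand-in for localized reflectance measurements:
        # (n_pixels, n_wavelengths); replace with real data.
        rng = np.random.default_rng(0)
        reflectance = rng.random((1000, 64))

        pca = PCA(n_components=3)                  # dimensionality reduction
        scores = pca.fit_transform(reflectance)    # intermediate PCA parameters

        ica = FastICA(n_components=3, random_state=0)
        sources = ica.fit_transform(scores)        # statistically independent components

        # One independent component may be interpreted as a tumor-related source;
        # rescaling it to [0, 1] gives a rough per-pixel tumor probability map.
        tumor_component = sources[:, 0]
        tumor_prob = (tumor_component - tumor_component.min()) / np.ptp(tumor_component)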

    Human robot interaction in a crowded environment

    Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body, such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who has initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human-robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether the people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them, or, if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, it can analyse further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7]
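    A minimal sketch of fusing several visual cues with Bayes' rule to score the hypothesis "this person is addressing the robot". The cue names, the likelihood values and the independence assumption are illustrative, not the thesis's actual network.

        def fuse_cues(prior, likelihoods):
            """Naive-Bayes style fusion of independent cue likelihoods.

            prior       -- P(interaction) before seeing the cues
            likelihoods -- list of (P(cue | interaction), P(cue | no interaction))
            """
            p_yes, p_no = prior, 1.0 - prior
            for l_yes, l_no in likelihoods:
                p_yes *= l_yes
                p_no *= l_no
            return p_yes / (p_yes + p_no)   # posterior P(interaction | cues)

        # Example: face turned towards the robot, waving gesture, person not in a group
        posterior = fuse_cues(0.2, [(0.9, 0.3), (0.8, 0.1), (0.7, 0.5)])
        print(f"P(person is addressing the robot) = {posterior:.2f}")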

    Computer mediated colour fidelity and communication

    Developments in technology have meant that computer-controlled imaging devices are becoming more powerful and more affordable. Despite their increasing prevalence, computer-aided design and desktop publishing software has failed to keep pace, leading to disappointing colour reproduction across different devices. Although there has been a recent drive to incorporate colour management functionality into modern computer systems, in general this is limited in scope and fails to properly consider the way in which colours are perceived. Furthermore, differences in viewing conditions or representation severely impede the communication of colour between groups of users. The approach proposed here is to provide WYSIWYG colour across a range of imaging devices through a combination of existing device characterisation and colour appearance modelling techniques. In addition, to further facilitate colour communication, various common colour notation systems are defined by a series of mathematical mappings. This enables both the implementation of computer-based colour atlases (which have a number of practical advantages over physical specifiers) and the interrelation of colours represented in hitherto incompatible notations. Together with the proposed solution, details are given of a computer system which has been implemented. The system was used by textile designers for a real task. Prior to undertaking this work, designers were interviewed in order to ascertain where colour played an important role in their work and where it was found to be a problem. A summary of the findings of these interviews, together with a survey of existing approaches to the problems of colour fidelity and communication in colour computer systems, is also given. As background to this work, the topics of colour science and colour imaging are introduced
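    A minimal sketch of the kind of device-independent colour mapping such a system relies on: converting an sRGB value to CIE XYZ and then CIELAB under a D65 white point. This is only the standard textbook transform; the thesis's own characterisation models and notation mappings are not reproduced here.

        import numpy as np

        M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                                  [0.2126, 0.7152, 0.0722],
                                  [0.0193, 0.1192, 0.9505]])
        D65_WHITE = np.array([0.95047, 1.00000, 1.08883])

        def srgb_to_lab(rgb):
            """rgb: sRGB triplet with components in [0, 1]; returns (L*, a*, b*)."""
            rgb = np.asarray(rgb, dtype=float)
            # Undo the sRGB gamma (companding) to obtain linear light
            linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
            xyz = M_SRGB_TO_XYZ @ linear
            # CIELAB from XYZ, relative to the D65 white point
            t = xyz / D65_WHITE
            f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
            L = 116 * f[1] - 16
            a = 500 * (f[0] - f[1])
            b = 200 * (f[1] - f[2])
            return L, a, b

        print(srgb_to_lab([0.5, 0.2, 0.7]))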

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks

    Remote Sensing for Non‐Technical Survey

    This chapter describes the research activities of the Royal Military Academy on remote sensing applied to mine action. Remote sensing can be used to detect specific features that could lead to the suspicion of the presence, or absence, of mines. Work on the automatic detection of trenches and craters is presented here. Land cover can be extracted and is quite useful for mine action; we present a classification method based on Gabor filters. The relief of a region helps analysts to understand where mines could have been laid, and methods to derive a digital terrain model from a digital surface model are explained. The special case of multi-spectral classification is also addressed in this chapter, and a discussion of data fusion is given. Hyper-spectral data are addressed with a change detection method. Synthetic aperture radar data and their fusion with optical data have been studied. Radar interferometry and polarimetry are also addressed
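    A minimal sketch of Gabor-filter texture features for land-cover classification, in the spirit of the method mentioned above. The chosen frequencies, orientations, the test image and the k-means grouping step are illustrative assumptions, not the chapter's actual configuration.

        import numpy as np
        from skimage.filters import gabor
        from skimage.data import camera
        from sklearn.cluster import KMeans

        image = camera().astype(float) / 255.0     # placeholder for a satellite band

        features = []
        for frequency in (0.1, 0.2, 0.4):
            for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
                real, imag = gabor(image, frequency=frequency, theta=theta)
                features.append(np.hypot(real, imag))     # local Gabor magnitude
        features = np.stack(features, axis=-1).reshape(-1, 12)

        # Unsupervised grouping of pixels into candidate land-cover classes
        labels = KMeans(n_clusters=4, n_init=10).fit_predict(features)
        label_map = labels.reshape(image.shape)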

    Retinal Fundus Image Analysis for Diagnosis of Glaucoma: A Comprehensive Survey

    © 2016 IEEE. The rapid development of digital imaging and computer vision has increased the potential for using image processing technologies in ophthalmology. Image processing systems are used in standard clinical practice with the development of medical diagnostic systems. Retinal images provide vital information about the health of the sensory part of the visual system. Retinal diseases, such as glaucoma, diabetic retinopathy, age-related macular degeneration, Stargardt's disease, and retinopathy of prematurity, can lead to blindness and manifest as artifacts in the retinal image. An automated system can be used to offer standardized large-scale screening at a lower cost, which may reduce human error, provide services to remote areas, and be free from observer bias and fatigue. Treatment for retinal diseases is available; the challenge lies in finding a cost-effective approach with high sensitivity and specificity that can be applied to large populations in a timely manner to identify those who are at risk in the early stages of the disease. The progression of glaucoma is very often silent in the early stages. The number of people affected has been increasing, and patients are seldom aware of the disease, which can delay treatment. A review of how computer-aided approaches may be applied in the diagnosis and staging of glaucoma is presented here. The current status of the computer technology is reviewed, covering localization and segmentation of the optic nerve head, pixel-level glaucomatous changes, diagnosis using 3-D data sets, and artificial neural networks for detecting the progression of glaucoma
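    A minimal, illustrative sketch of one classical step reviewed above: locating the optic nerve head as the brightest compact region of a fundus image. The use of the green channel and the smoothing scale are assumptions; the systems covered by the survey use far more robust methods.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def locate_optic_disc(fundus_rgb):
            """fundus_rgb: (H, W, 3) float array in [0, 1]; returns (row, col)."""
            green = fundus_rgb[..., 1]                     # disc/vessel contrast is highest here
            smoothed = gaussian_filter(green, sigma=15)    # suppress vessels and noise
            return np.unravel_index(np.argmax(smoothed), smoothed.shape)

        # Example with a synthetic bright blob standing in for the optic disc
        img = np.zeros((200, 200, 3))
        img[60:80, 120:140, :] = 1.0
        print(locate_optic_disc(img))                      # approximately (70, 130)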

    Processing of image sequences from fundus camera

    The aim of this master's thesis was to propose a method of retinal sequence analysis that evaluates the quality of each frame. The theoretical part also covers the properties of retinal sequences and the registration of images from the fundus camera. In the practical part, the image quality assessment method is implemented, tested on real retinal sequences, and its success is evaluated. The work also assesses the impact of the proposed method on the registration of retinal images.
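    A minimal sketch of a per-frame quality score of the kind such an analysis might compute: the variance of the Laplacian as a simple sharpness measure. The metric and the thresholding step are illustrative assumptions, not the thesis's actual method.

        import numpy as np
        from scipy.ndimage import laplace

        def frame_quality(frame):
            """frame: 2-D grayscale array; higher values indicate a sharper frame."""
            return float(laplace(frame.astype(float)).var())

        def select_good_frames(sequence, threshold):
            """Keep frames of a retinal sequence whose sharpness exceeds the threshold."""
            return [f for f in sequence if frame_quality(f) > threshold]

        # Example with random frames standing in for a real fundus sequence
        rng = np.random.default_rng(0)
        sequence = [rng.random((64, 64)) for _ in range(5)]
        print([round(frame_quality(f), 3) for f in sequence])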