
    Learning Feature Selection and Combination Strategies for Generic Salient Object Detection

    For a diverse range of applications in machine vision, from social media search to robotic home care, it is important to replicate the mechanism by which the human brain selects the most important visual information while suppressing the remaining, non-usable information. Many computational methods attempt to model this process by following the traditional model of visual attention, which involves feature extraction, conditioning and combination to capture this behaviour of human visual attention. Consequently, the model has inherent design choices at its various stages: selection of parameters related to the feature computation process, the conditioning approach, feature importance, and the combination approach. Despite rapid research and substantial improvements in benchmark performance, the performance of many models depends on tuning these design choices in an ad hoc fashion. Moreover, these design choices are heuristic in nature, resulting in good performance only in certain settings; many such models therefore exhibit low robustness to difficult stimuli and the complexities of real-world imagery. Machine learning and optimisation techniques have long been used to increase the generalisability of a system to unseen data, yet artificial learning techniques have surprisingly not been investigated to their full potential for improving the generalisation of visual attention methods. The proposed thesis is that artificial learning can increase the generalisability of the traditional model of visual attention by effective selection and optimal combination of features. The following new techniques have been introduced at various stages of the traditional model of visual attention to improve its generalisation performance, specifically on challenging cases of saliency detection:
    1. Joint optimisation of feature-related parameters and feature importance weights is introduced for the first time to improve the generalisation of the traditional model of visual attention. To evaluate the joint learning hypothesis, a new method, GAOVSM, is introduced for the task of eye fixation prediction. By finding the relationships between feature-related parameters and feature importance, the developed method improves the generalisation performance of baseline methods that employ human-encoded parameters.
    2. Spectral-matting-based figure-ground segregation is introduced to overcome the artifacts encountered by region-based salient object detection approaches. By suppressing unwanted background information and assigning saliency to object parts in a uniform manner, the developed FGS approach overcomes the limitations of region-based approaches.
    3. Joint optimisation of feature computation parameters and feature importance weights is introduced, for the first time in salient object detection, for the optimal combination of FGS with complementary features. By learning feature-related parameters and their respective importance at multiple segmentation thresholds, and by considering the performance gaps amongst features, the developed FGSopt method improves the object detection performance of the FGS technique while also improving upon several state-of-the-art salient object detection models.
    4. The introduction of multiple combination schemes/rules further extends the generalisability of the traditional attention model beyond that of joint-optimisation-based single rules. The introduction of feature-composition-based grouping of images enables the developed IGA method to autonomously identify an appropriate combination strategy for an unseen image. The results of a pairwise rank-sum test confirm that the IGA method is significantly better than the deterministic and classification-based benchmark methods at the 99% confidence level. Extending this line of research, a novel relative encoding approach enables the adapted XCSCA method to group images having similar saliency prediction ability. By keeping track of previous inputs, the introduced action part of the XCSCA approach enables the learning of generalised feature importance rules. Through more accurate grouping of images than IGA, generalised learnt rules and appropriate application of feature importance rules, the XCSCA approach improves upon the generalisation performance of the IGA method.
    5. The introduced uniform saliency assignment and segmentation quality cues enable label-free evaluation of a feature/saliency map. By accurate ranking and effective clustering, the developed DFS method successfully solves, for the first time in saliency detection, the complex problem of finding appropriate features for combination on an image-by-image basis. The DFS method enables ground-truth-free evaluation of saliency methods and advances the state of the art in data-driven saliency aggregation by detecting and deselecting redundant information.
    The final contribution is that the developed methods are formed into a complete system, where analysis shows the effects of their interactions on the system. Based on the saliency prediction accuracy versus computational time trade-off, specialised variants of the proposed methods are presented, along with recommendations for further use by other saliency detection systems. This research has shown that artificial learning can increase the generalisation of the traditional model of attention by effective selection and optimal combination of features. Overall, this thesis has shown that it is the ability to autonomously segregate images based on their types, and the subsequent learning of appropriate combinations, that aids generalisation on difficult unseen stimuli.
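    A hedged sketch of the joint-optimisation theme running through contributions 1 and 3: a single genome carries both feature-computation parameters (here, the blur scales of toy centre-surround channels) and the importance weights used to combine the resulting maps, and a small genetic algorithm scores genomes against ground-truth fixation maps. The feature channels, the correlation fitness and all GA settings are illustrative assumptions, not the thesis's GAOVSM or FGSopt implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def blur(img, sigma):
    # Separable Gaussian blur via direct 1-D convolution (no SciPy needed).
    k = max(1, int(3 * sigma))
    kern = np.exp(-0.5 * (np.arange(-k, k + 1) / max(sigma, 1e-6)) ** 2)
    kern /= kern.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, out)

def combined_saliency(img, sigmas, weights):
    # Toy feature channels: centre-surround contrast at the learned scales.
    maps = [np.abs(img - blur(img, s)) for s in sigmas]
    sal = sum(w * m for w, m in zip(weights, maps))
    return sal / (sal.max() + 1e-9)

def fitness(genome, images, fixations):
    # Mean linear correlation between predicted saliency and fixation maps.
    sigmas, weights = genome[:3], genome[3:]
    scores = [np.corrcoef(combined_saliency(im, sigmas, weights).ravel(),
                          fx.ravel())[0, 1]
              for im, fx in zip(images, fixations)]
    return float(np.nanmean(scores))

def evolve(images, fixations, pop_size=20, generations=30):
    # A genome holds BOTH feature parameters (3 sigmas) and importance
    # weights (3 values), so the two are optimised jointly, not separately.
    lo = np.array([0.5] * 3 + [0.0] * 3)
    hi = np.array([8.0] * 3 + [1.0] * 3)
    pop = rng.uniform(lo, hi, size=(pop_size, 6))
    for _ in range(generations):
        ranked = sorted(pop, key=lambda g: fitness(g, images, fixations),
                        reverse=True)
        elite = ranked[:pop_size // 4]
        children = [np.clip(elite[rng.integers(len(elite))]
                            + rng.normal(0, 0.2, 6), lo, hi)
                    for _ in range(pop_size - len(elite))]
        pop = np.array(elite + children)
    return max(pop, key=lambda g: fitness(g, images, fixations))
```

    Because changing a scale parameter reshapes its feature map, the best combination weights shift with it; encoding both in one genome lets the search exploit exactly that coupling, which is the argument for joint rather than separate optimisation.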

    Contribution to the processing of complex visual information and to autonomous knowledge extraction (application to autonomous robotics)

    The work accomplished in this thesis concerns the development of an autonomous machine cognition system. The proposed solution rests on the assumption that it is curiosity which motivates a cognitive system to acquire new knowledge. Further, two distinct kinds of curiosity are identified, in conformity with the human cognitive system. On this basis, a two-level cognitive architecture is built. Its lower level is identified with the perceptual saliency mechanism, while the higher level performs knowledge acquisition from observation and interaction with the environment. This thesis brings the following contributions: A) An investigation of the state of the art in autonomous knowledge acquisition. B) The realisation of the lower cognitive level of the system, which implements the perceptual curiosity mechanism through a novel, fast algorithm for salient object detection and learning that is robust in real-world conditions. C) The realisation of the higher cognitive level through a general framework for knowledge acquisition from observation and interaction with the environment, including humans. Based on epistemic curiosity, the high-level cognitive system enables a machine (e.g. a robot) to be itself the actor of its own learning. An important consequence of this system is the possibility of conferring high-level multimodal cognitive capabilities on robots to increase their autonomy in real-world (human) environments. D) The realisation of the proposed strategy in the context of autonomous robotics. The studies and experimental validations carried out notably confirmed that our approach allows increasing the autonomy of robots in real-world environments.
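    The lower, perceptual-curiosity level can be caricatured as "attend to the most salient thing you have not examined yet". The minimal sketch below assumes a saliency map and a visit-count map and applies inhibition-of-return damping; the damping scheme is my assumption for illustration, not a detail taken from the thesis.

```python
import numpy as np

def next_curiosity_target(saliency, visited, decay=0.8):
    """Pick the next image location for the robot to attend to and learn from.

    saliency: 2-D array, higher = more perceptually salient.
    visited:  2-D array of visit counts over the same grid.
    Damping already-visited locations (inhibition of return) makes the
    selection curiosity-like: novelty eventually wins over raw saliency.
    """
    score = saliency / (1.0 + decay * visited)
    return np.unravel_index(np.argmax(score), score.shape)

# Example: the strongest peak at (1, 1) is skipped once it has been visited.
sal = np.array([[0.2, 0.1, 0.0],
                [0.1, 0.9, 0.3],
                [0.0, 0.3, 0.7]])
seen = np.zeros_like(sal)
seen[1, 1] = 2
print(next_curiosity_target(sal, seen))  # -> (2, 2)
```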

    Towards adaptive and autonomous humanoid robots: from vision to actions

    Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. In all of these, a clear need arises to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, the combination of domain knowledge from both image processing and machine learning techniques can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable and robust, and requires only small training sets: it was tested with 5 to 10 images per experiment. Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proof-of-concept integrations of the motion and action sides. First, reactive reaching and grasping is shown: it allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables the robot to be used in non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
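    To make the CGP-IP idea concrete, here is a deliberately tiny Cartesian-genetic-programming sketch: a genome is a fixed list of nodes over image-processing primitives, each node reading the input image or an earlier node, evolved with the 1+4 strategy common in CGP. The five-primitive set, the single-output simplification and the pixel-error fitness are illustrative assumptions; the published CGP-IP uses a much larger OpenCV-based function set and the Matthews correlation coefficient as its fitness.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical primitive set standing in for CGP-IP's OpenCV-based one.
PRIMITIVES = [
    lambda a, b: np.minimum(a, b),
    lambda a, b: np.maximum(a, b),
    lambda a, b: np.clip(a - b, 0.0, 1.0),
    lambda a, b: (a + b) / 2.0,
    lambda a, b: np.where(a > a.mean(), 1.0, 0.0),  # threshold on a's mean
]

N_NODES = 12

def random_genome():
    # Node i may read from the input image (index 0) or any earlier node.
    return [(rng.integers(len(PRIMITIVES)),
             rng.integers(0, i + 1), rng.integers(0, i + 1))
            for i in range(N_NODES)]

def execute(genome, image):
    values = [image]  # index 0 is the input channel
    for fn_idx, a, b in genome:
        values.append(PRIMITIVES[fn_idx](values[a], values[b]))
    return values[-1]  # last node is the output, as a simplification

def fitness(genome, images, masks):
    # Fraction of misclassified pixels against the target binary masks.
    err = 0.0
    for img, mask in zip(images, masks):
        pred = execute(genome, img) > 0.5
        err += float(np.mean(pred != mask))
    return err / len(images)

def evolve(images, masks, generations=200):
    # 1+4 evolutionary strategy: one parent, four mutated children per step.
    parent = random_genome()
    best = fitness(parent, images, masks)
    for _ in range(generations):
        for _ in range(4):
            child = list(parent)
            i = int(rng.integers(N_NODES))  # point-mutate one node
            child[i] = (rng.integers(len(PRIMITIVES)),
                        rng.integers(0, i + 1), rng.integers(0, i + 1))
            f = fitness(child, images, masks)
            if f <= best:
                parent, best = child, f
    return parent, best
```

    The `f <= best` acceptance, which keeps fitness-neutral mutations, is a standard CGP detail that helps the search drift across plateaus; it is also why such small training sets can suffice, since each evaluation is cheap.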

    Human Metaphase Chromosome Analysis using Image Processing

    Development of an effective human metaphase chromosome analysis algorithm can optimize expert time usage by increasing the efficiency of many clinical diagnosis processes. Although many methods exist in the literature, they are only applicable to limited morphological variations and are specific to the staining method used during cell preparation. They are also highly influenced by irregular chromosome boundaries as well as by the presence of artifacts such as premature sister chromatid separation. Therefore, an algorithm is proposed in this research that can operate with any morphological variation of the chromosome across images from multiple staining methods. The proposed algorithm is capable of calculating the segmentation outline, the centerline (which gives the chromosome length), the partitioning of the telomere regions, and the centromere location of a given chromosome. The algorithm also detects and corrects for the sister chromatid separation artifact in metaphase cell images. A metric termed the Candidate Based Centromere Confidence (CBCC) is proposed to accompany each centromere detection result of the proposed method, giving an indication of the confidence the algorithm has in a given localization. The proposed method was first tested for its ability to calculate an accurate width profile, against a centerline-based method [1], using 226 chromosomes. A statistical analysis of the centromere detection error values showed that the proposed method locates centromeres accurately, with statistical significance. Furthermore, the proposed method performed more consistently across different staining methods in comparison to the centerline-based approach. When tested with a larger data set of 1400 chromosomes collected from a set of DAPI (4′,6-diamidino-2-phenylindole) and Giemsa stained cell images, the proposed candidate-based centromere detection algorithm was able to accurately localize 1220 centromere locations, yielding a detection accuracy of 87%.
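    The centerline-and-width machinery can be sketched in plain NumPy: walk the centerline, step outward along each local normal until leaving the binary mask, and record the object width at that point. This is a toy stand-in under my own assumptions (the normal-marching scheme and its parameters are not taken from the thesis), meant only to show what a width profile along a centerline is.

```python
import numpy as np

def width_profile(mask, centerline, max_halfwidth=40):
    """Width of a binary object measured along normals to its centerline.

    mask:       2-D boolean array (True inside the chromosome).
    centerline: (N, 2) array of (row, col) points ordered along the axis.
    """
    pts = np.asarray(centerline, dtype=float)
    # Local tangent from neighbouring centerline points; the normal is
    # the tangent rotated by 90 degrees: (dr, dc) -> (dc, -dr).
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True) + 1e-9
    normals = tangents[:, ::-1] * np.array([1.0, -1.0])

    widths = np.zeros(len(pts))
    for i, (p, n) in enumerate(zip(pts, normals)):
        w = 0
        for direction in (+1.0, -1.0):
            for step in range(1, max_halfwidth):
                r, c = np.round(p + direction * step * n).astype(int)
                inside = (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                          and mask[r, c])
                if not inside:
                    break
                w += 1
        widths[i] = w + 1  # count the centerline pixel itself
    return widths
```

    On a real mask one would then look for the centromere near the minimum of this profile after trimming the tapering telomere ends, e.g. `trim + np.argmin(widths[trim:-trim])`; that post-processing step is likewise only a plausible assumption here, not the thesis's candidate-based detector.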

    Final report key contents: main results accomplished by the EU-Funded project IM-CLeVeR - Intrinsically Motivated Cumulative Learning Versatile Robots

    This document has the goal of presenting the main scientific and technological achievements of the project IM-CLeVeR. The document is organised as follows:
    1. Project executive summary: a brief overview of the project vision, objectives and keywords.
    2. Beneficiaries of the project and contacts: a list of the project Teams (partners), Team Leaders and contacts.
    3. Project context and objectives: the vision of the project and its overall objectives.
    4. Overview of work performed and main results achieved: a one-page overview of the main results of the project.
    5. Overview of main results per partner: a bullet-point list of main results per partner.
    6. Main achievements in detail, per partner: a thorough explanation of the main results per partner (also covering collaborative work), with references to the main publications supporting them.

    Third Conference on Artificial Intelligence for Space Applications, part 2

    Topics related to the application of artificial intelligence to space operations are discussed, including new technologies for space station automation, design data capture, computer vision, neural nets, automatic programming, and real-time applications.

    Gestures in human-robot interaction

    Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. Therefore, they can be used effectively in human-robot interaction, or in human-machine interaction in general, as a way for a robot or a machine to infer meaning. In order for people to use gestures intuitively and to understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary displays which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, which concerns the classification of body motion into discrete gesture classes, relying on pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained with a low number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
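    Dynamic time warping with one template per class is simple enough to sketch end-to-end. The sketch below is a generic nearest-template DTW classifier, not the thesis's algorithm: the length normalisation and the rejection threshold are my assumptions, included because a recogniser running on live streams needs a way to reject non-gestures.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two gesture trajectories.

    a, b: (T, d) arrays of d-dimensional samples (e.g. 3-D hand positions).
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)  # length-normalised so any-length pairs compare

class OneShotGestureClassifier:
    """Nearest-template classifier: one (or a few) recordings per class."""

    def __init__(self):
        self.templates = {}  # label -> list of (T, d) trajectories

    def add_example(self, label, trajectory):
        self.templates.setdefault(label, []).append(
            np.asarray(trajectory, dtype=float))

    def classify(self, trajectory, reject_threshold=np.inf):
        # Anything farther than the threshold from every template is
        # rejected (returned as None), which filters out non-gestures.
        query = np.asarray(trajectory, dtype=float)
        best_label, best_dist = None, reject_threshold
        for label, examples in self.templates.items():
            for tmpl in examples:
                d = dtw_distance(query, tmpl)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label, best_dist
```

    One recording per class added via `add_example` already yields a working recogniser, which is the one-shot property; any extra recordings simply become additional templates, so a few-shot setup degrades gracefully into nearest-neighbour search over all stored examples.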