29 research outputs found

    Carotenoid intake during early life mediates ontogenetic colour shifts and dynamic colour change during adulthood

    No full text
    Carotenoids play an important role as one of the most prevalent pigments in animals. Carotenoid-based colourations account for striking sexually and naturally selected colour adaptations. Several anurans (frogs and toads) change body colouration either slowly and permanently between life stages (ontogenetic colour change) or rapidly and temporarily, within minutes or hours (dynamic colour change). We investigated the ontogenetic colour change from orange to green morphs and tested the influence of dietary carotenoids in Wallace’s flying frog, Rhacophorus nigropalmatus. Conspicuous orange-red colouration in post-metamorphic development and the formation of whitish dorsal spots present only in early life stages suggest that juveniles imitate bird droppings. At the age of nine months, while all individuals still possessed orange-red body colouration, a 20-week feeding experiment was performed, supplying frogs with either no carotenoid supplementation or dietary carotenoids once or four times per week. A high-carotenoid diet resulted in a faster increase in green colour saturation and high levels of green and yellow chroma of back colouration. Less or no carotenoid supplementation led to an increase in blue chroma, contributing to the dull turquoise appearance often observed in captive-bred and -raised anurans. Dietary carotenoid availability in early life stages affected adaptive dynamic colour change when individuals were exposed to a mild stressor. Our results show that a high-carotenoid diet influenced the ability to rapidly and reversibly change body colouration, an ability absent in frogs receiving no carotenoids. Dynamic colour changes were likewise performed in response to changing light conditions, presumably camouflaging individuals and providing protection from UV irradiation.
The ontogenetic and dynamic pigmentation changes are discussed in light of mechanism and function to promote defensive strategies at different life stages and in different environments to avoid predation.

Description of dataset: The four datasets contain the following colour parameters for each subadult/adult individual at each measurement: total brightness, maximal reflectance, maximal slope, blue chroma, green chroma, UV chroma and maximum average yellow chroma. Dataset "1_OCC" contains the measurements of the first nine months. In the datasets "2_FeedingExperiment" and "3_HandlingStimulus", the conditions G1, G2 and G3 indicate the experimental groups of the feeding experiment (see methods). The dataset "3_HandlingStimulus" contains the measurements at 1) baseline, 2) after handling and 3) the resting period for each individual. The dataset "4_LightConditions" contains the measurements of adults and subadults of group G3 in both set-ups (set-up 1: conditions shade/lit; set-up 2: conditions leaf/exposed; see methods). The dataset is archived at DANS/EASY and is not accessible here; the files can be listed and accessed via the DOI link above.

    Humanoid TeenSize Open Platform NimbRo-OP

    No full text

    Place recognition using surface entropy features

    No full text
    In this paper, we present an interest point detector and descriptor for 3D point clouds and depth images, coined SURE, and use it for recognizing semantically distinct places in indoor environments. We propose an interest operator that selects distinctive points on surfaces by measuring the variation in surface orientation, based on surface normals in the local vicinity of a point. Furthermore, we design a view-pose-invariant descriptor that captures local surface properties and incorporates coloured texture information. In experiments, we compare our approach to a state-of-the-art feature detector in depth images (NARF). Our descriptor achieves superior results for matching interest points between images and also requires lower computation time. Finally, we evaluate the use of SURE features for recognizing places.
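    The abstract does not spell out how surface-orientation variation is scored, so the following is only a minimal illustrative sketch of the core idea: rating a query point by the dispersion (here, the histogram entropy) of surface-normal directions in its neighbourhood. The function name, the spherical-histogram binning, and the entropy formulation are all assumptions, not the published SURE operator.

```python
import numpy as np

def normal_entropy(normals, bins=8):
    """Score surface variation as the entropy of normal directions.

    normals: (N, 3) array of unit surface normals sampled from the
    local vicinity of a query point. Higher entropy means more varied
    orientation, i.e. a more distinctive (interesting) point.
    Illustrative stand-in only, not the actual SURE formulation.
    """
    # Spherical coordinates of each normal: azimuth and inclination.
    azimuth = np.arctan2(normals[:, 1], normals[:, 0])
    inclination = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))
    # 2D orientation histogram, normalized to a probability mass.
    hist, _, _ = np.histogram2d(azimuth, inclination, bins=bins,
                                range=[[-np.pi, np.pi], [0.0, np.pi]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Flat patch: identical normals -> zero entropy (not interesting).
flat = np.tile([0.0, 0.0, 1.0], (100, 1))
# Corner-like patch: normals spread over many directions.
rng = np.random.default_rng(0)
v = rng.normal(size=(100, 3))
varied = v / np.linalg.norm(v, axis=1, keepdims=True)

assert normal_entropy(flat) == 0.0
assert normal_entropy(varied) > normal_entropy(flat)
```

    A flat wall thus scores zero while a corner or object edge scores high, which matches the abstract's intent of selecting points where surface orientation varies.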

    Learning common and specific features for RGB-D semantic segmentation with deconvolutional networks

    Full text link
    © Springer International Publishing AG 2016. In this paper, we tackle the problem of RGB-D semantic segmentation of indoor images. We take advantage of deconvolutional networks, which can predict pixel-wise class labels, and develop a new structure for deconvolution of multiple modalities. We propose a novel feature transformation network to bridge the convolutional and deconvolutional networks. In the feature transformation network, we correlate the two modalities by discovering common features between them, and characterize each modality by discovering modality-specific features. With the common features, we not only closely correlate the two modalities but also allow them to borrow features from each other to enhance the representation of shared information. With the specific features, we capture the visual patterns that are visible in only one modality. The proposed network achieves competitive segmentation accuracy on the NYU Depth datasets V1 and V2.
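    The data flow described above can be sketched in a few lines. This is not the paper's trained deconvolutional architecture; the random matrices, variable names, and feature sizes are placeholders that only illustrate how common and modality-specific features are split and then "borrowed" across modalities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoder outputs for the two modalities (RGB and depth).
f_rgb = rng.normal(size=64)
f_depth = rng.normal(size=64)

# Hypothetical learned linear maps: one pair projects each modality
# into a shared "common" space, another pair extracts modality-specific
# features. In the paper these roles are played by a trained feature
# transformation network; random matrices here only show the wiring.
W_common_rgb = rng.normal(size=(32, 64))
W_common_depth = rng.normal(size=(32, 64))
W_spec_rgb = rng.normal(size=(32, 64))
W_spec_depth = rng.normal(size=(32, 64))

common_rgb = W_common_rgb @ f_rgb
common_depth = W_common_depth @ f_depth
spec_rgb = W_spec_rgb @ f_rgb
spec_depth = W_spec_depth @ f_depth

# "Borrowing": each modality's decoder input combines its own specific
# features with the common features of *both* modalities.
dec_in_rgb = np.concatenate([spec_rgb, common_rgb, common_depth])
dec_in_depth = np.concatenate([spec_depth, common_depth, common_rgb])

assert dec_in_rgb.shape == (96,)
assert dec_in_depth.shape == (96,)
```

    The design point is that shared scene structure reaches both decoders even when it is clearly observed in only one modality, while patterns unique to one sensor are still preserved.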

    Semantic Localization of a Robot in a Real Home

    No full text
    In social robotics, it is important that a mobile robot knows where it is, because this provides a starting point for other activities such as moving from one room to another. As a contribution to solving this problem in the field of semantic localization of mobile robots, we propose a methodology for scene recognition and learning in a real domestic environment. For this purpose, we used images from five different residences to create a dataset with which the base model was trained. The effectiveness of the implemented base model is evaluated in different scenarios. When the accuracy of site identification decreases, the user provides feedback to the robot so that it can process the information collected from the new environment and re-identify the current location. The results obtained reinforce the need to acquire more knowledge when the environment is not recognizable by the pre-trained model.
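    The feedback loop described above can be sketched with a deliberately simple stand-in model. The paper's base model is a trained scene classifier; here a nearest-centroid classifier over precomputed image features (class, method, and feature names are all assumptions) illustrates how user-labelled feedback from a new home can be folded into the model so the current room is re-identified.

```python
import numpy as np

class SceneRecognizer:
    """Nearest-centroid scene classifier with user-feedback updates.

    Illustrative stand-in for a pre-trained base model: when site
    identification fails in a new environment, labelled feedback
    images shift the class centroids. Feature extraction is assumed
    to happen upstream.
    """

    def __init__(self):
        self.centroids = {}  # room label -> running mean feature
        self.counts = {}

    def add_example(self, label, feat):
        # Incrementally update the running mean for this room.
        n = self.counts.get(label, 0)
        c = self.centroids.get(label, np.zeros_like(feat))
        self.centroids[label] = (c * n + feat) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, feat):
        # Closest centroid wins.
        return min(self.centroids,
                   key=lambda l: np.linalg.norm(self.centroids[l] - feat))

rec = SceneRecognizer()
rec.add_example("kitchen", np.array([1.0, 0.0]))
rec.add_example("bedroom", np.array([0.0, 1.0]))
# New environment: a user-corrected example shifts the kitchen centroid.
rec.add_example("kitchen", np.array([0.8, 0.2]))
assert rec.predict(np.array([0.9, 0.1])) == "kitchen"
```

    The same incremental-update pattern applies whatever the underlying classifier is: feedback examples from the unfamiliar environment are treated as additional training data rather than discarded.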

    Active Recognition and Manipulation for Mobile Robot Bin Picking

    No full text
    Grasping individual objects from an unordered pile in a box has so far been investigated only in stationary scenarios. In this work, we present a complete system, including active object perception and grasp planning, for bin picking with a mobile robot. At the core of our approach is an efficient representation of objects as compounds of simple shape and contour primitives. This representation is used for both robust object perception and efficient grasp planning. To be able to manipulate previously unknown objects, we learn object models from single scans in an offline phase. During operation, objects are detected in the scene using a particularly robust probabilistic graph matching. To cope with severe occlusions, we employ active perception that considers not only previously unseen volume but also the outcomes of primitive and object detection. The combination of shape and contour primitives makes our object perception approach particularly robust, even in the presence of noise, occlusions, and missing information. For grasp planning, we efficiently pre-compute possible grasps directly on the learned object models. During operation, grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin-picking and part-delivery task.
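    The multiresolution height map mentioned for grasp and arm planning can be illustrated with a small sketch. This is an assumption-laden simplification: the function name is invented, and a real local multiresolution map keeps fine resolution near the grasp target rather than building a uniform pyramid. The sketch only shows the conservative max-pooling idea, where coarser cells never under-report obstacle height.

```python
import numpy as np

def multires_height_maps(height_map, levels=3):
    """Build a multiresolution pyramid from a 2D height map.

    Each coarser level halves the resolution, keeping the maximum
    height per 2x2 cell: a conservative choice, so a collision check
    at a coarse level never misses an obstacle. Illustrative sketch
    only; hypothetical helper, not the paper's data structure.
    """
    pyramid = [height_map]
    for _ in range(levels - 1):
        h = pyramid[-1]
        r, c = h.shape[0] // 2, h.shape[1] // 2
        coarse = h[:2 * r, :2 * c].reshape(r, 2, c, 2).max(axis=(1, 3))
        pyramid.append(coarse)
    return pyramid

hm = np.zeros((8, 8))
hm[3, 4] = 0.12  # a 12 cm obstacle in the bin
levels = multires_height_maps(hm)

# The obstacle height survives max-pooling at every level.
assert all(lvl.max() == 0.12 for lvl in levels)
assert [lvl.shape for lvl in levels] == [(8, 8), (4, 4), (2, 2)]
```

    Planning on the coarse levels first and refining only where needed is what makes such maps efficient for arm-motion queries.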