1,645 research outputs found

    Multisensory System for Fruit Harvesting Robots. Experimental Testing in Natural Scenarios and with Different Kinds of Crops

    The motivation of this research was to explore the feasibility of detecting and locating fruits from different kinds of crops in natural scenarios. To this end, a unique, modular and easily adaptable multisensory system and a set of associated pre-processing algorithms are proposed. The multisensory rig combines a high-resolution colour camera and a multispectral system for the detection of fruits as well as for the discrimination of the different elements of the plants, and a Time-Of-Flight (TOF) camera that provides fast acquisition of distances, enabling the localisation of the targets in the coordinate space. A controlled lighting system completes the set-up, increasing its flexibility for use in different working conditions. The pre-processing algorithms designed for the proposed multisensory system include a pixel-based classification algorithm that labels areas of interest belonging to fruits, and a registration algorithm that combines the results of the classification algorithm with the data provided by the TOF camera for the 3D reconstruction of the desired regions. Several experimental tests have been carried out in outdoor conditions in order to validate the capabilities of the proposed system.
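
    The fusion step described above, in which pixels labelled as fruit are combined with the TOF depth data to localise targets in 3D, can be illustrated with a minimal Python sketch. This is not the authors' implementation: it assumes a pinhole camera model, placeholder intrinsics (fx, fy, cx, cy) and that the fruit mask and the depth image are already registered to the same pixel grid.

```python
import numpy as np

# Sketch of the fusion step: pixels labelled as fruit by the classifier are
# back-projected with the TOF depth values to obtain 3D target positions.
# The pinhole intrinsics (fx, fy, cx, cy) and the shared pixel grid for mask
# and depth are illustrative assumptions, not details from the paper.

def fruit_points_3d(fruit_mask, depth, fx, fy, cx, cy):
    """Return an N x 3 array of camera-frame points for fruit-labelled pixels."""
    v, u = np.nonzero(fruit_mask)      # rows/columns classified as fruit
    z = depth[v, u]                    # TOF range at those pixels (metres)
    valid = z > 0                      # drop missing TOF returns
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx              # pinhole back-projection
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))

if __name__ == "__main__":
    mask = np.zeros((480, 640), dtype=bool)
    mask[200:220, 300:330] = True                     # synthetic fruit blob
    depth = np.zeros((480, 640)); depth[mask] = 1.2   # blob placed 1.2 m away
    pts = fruit_points_3d(mask, depth, 525.0, 525.0, 319.5, 239.5)
    print(pts.mean(axis=0))            # rough 3D location of the blob centre
```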

    Specificity and coherence of body representations

    Bodily illusions differently affect body representations underlying perception and action. We investigated whether this task dependence reflects two distinct dimensions of embodiment: the sense of agency and the sense of the body as a coherent whole. In experiment 1 the sense of agency was manipulated by comparing active versus passive movements during the induction phase in a video rubber hand illusion (vRHI) setup. After induction, proprioceptive biases were measured both by perceptual judgments of hand position and by measuring end-point accuracy of subjects' active pointing movements to an external object with the affected hand. The results showed, first, that the vRHI is largely perceptual: passive perceptual localisation judgments were altered, but end-point accuracy of active pointing responses with the affected hand to an external object was unaffected. Second, within the perceptual judgments, there was a novel congruence effect, such that perceptual biases were larger following passive induction of vRHI than following active induction. There was a trend for the converse effect for pointing responses, with larger pointing bias following active induction. In experiment 2, we used the traditional RHI to investigate the coherence of body representation by synchronous stimulation of either matching or mismatching fingers on the rubber hand and the participant's own hand. Stimulation of matching fingers induced a local proprioceptive bias for only the stimulated finger, but did not affect the perceived shape of the hand as a whole. In contrast, stimulation of spatially mismatching fingers eliminated the RHI entirely. The present results show that (i) the sense of agency during illusion induction has specific effects, depending on whether we represent our body for perception or to guide action, and (ii) representations of specific body parts can be altered without affecting perception of the spatial configuration of the body as a whole.

    Technologies for safe and resilient earthmoving operations: A systematic literature review

    Resilience engineering relates to the ability of a system to anticipate, prepare for, and respond to predicted and unpredicted disruptions. It necessitates the use of monitoring and object detection technologies to ensure system safety in excavation systems. Given the increased investment and speed of improvement in these technologies, it is necessary to review the types of technology available and how they contribute to excavation system safety. A systematic literature review was conducted that identified and classified the existing monitoring and object detection technologies, and introduced essential enablers for reliable and effective monitoring and object detection systems, including: 1) the application of multisensory and data fusion approaches, and 2) system-level application of technologies. This study also identified the functionalities developed for accident anticipation, prevention and response to safety hazards during excavation, as well as those that facilitate learning in the system. Existing research gaps and future directions of research are also discussed.

    Method for automatic image registration based on distance-dependent planar projective transformations, oriented to images without common features

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Arquitectura de Computadores y Automática, defended on 18-12-2015. Multisensory data fusion oriented to image-based applications improves the accuracy, quality and availability of the data, and consequently the performance of robotic systems, by combining the information of a scene acquired from multiple and different sources into a unified representation of the 3D world scene, which is more enlightening and enriching for the subsequent image processing, improving either the reliability by using the redundant information, or the capability by taking advantage of complementary information. Image registration is one of the most relevant steps in image fusion techniques. This procedure aims at the geometrical alignment of two or more images. Normally, this process relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. For instance, in the combination of ToF and RGB cameras, robust feature matching is not reliable. Typically, the fusion of these two sensors has been addressed by computing the cameras' calibration parameters for coordinate transformation between them. As a result, a low-resolution colour depth map is provided. For improving the resolution of these maps and reducing the loss of colour information, extrapolation techniques are adopted. A crucial issue for computing high-quality and accurate dense maps is the presence of noise in the depth measurement from the ToF camera, which is normally reduced by means of sensor calibration and filtering techniques. However, the filtering methods, implemented for the data extrapolation and denoising, usually over-smooth the data, consequently reducing the accuracy of the registration procedure...
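
    The calibration-based baseline that the abstract refers to, transforming ToF measurements into the RGB camera frame to obtain a colour depth map, can be sketched as follows. This is a generic illustration under assumed pinhole models; the intrinsic matrices K_tof and K_rgb, the extrinsics R and t, and the function name are placeholders, and this is not the distance-dependent projective method proposed in the thesis itself.

```python
import numpy as np

# Sketch of the calibration-based baseline: ToF pixels are back-projected to
# 3D, moved into the RGB camera frame with extrinsics (R, t), and projected
# with the RGB intrinsics to pick up colour. K_tof, K_rgb, R and t are
# placeholder names for values obtained from a prior camera calibration.

def colour_point_cloud(depth_tof, rgb, K_tof, K_rgb, R, t):
    """Return an N x 6 array (X, Y, Z, r, g, b) expressed in the RGB frame."""
    h, w = depth_tof.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_tof.ravel()
    ok = z > 0                                        # valid ToF returns only
    pix = np.stack([u.ravel()[ok], v.ravel()[ok], np.ones(ok.sum())])
    p_tof = np.linalg.inv(K_tof) @ pix * z[ok]        # 3 x N, ToF camera frame
    p_rgb = R @ p_tof + t[:, None]                    # 3 x N, RGB camera frame
    proj = K_rgb @ p_rgb                              # project into RGB image
    uc = np.round(proj[0] / proj[2]).astype(int)
    vc = np.round(proj[1] / proj[2]).astype(int)
    inside = (uc >= 0) & (uc < rgb.shape[1]) & (vc >= 0) & (vc < rgb.shape[0])
    colours = rgb[vc[inside], uc[inside]]             # sample RGB colour
    return np.hstack([p_rgb.T[inside], colours])
```

    Splatting these points back onto the RGB pixel grid gives the sparse, low-resolution colour depth map mentioned above, which the extrapolation and filtering stages criticised in the abstract would then densify.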

    Echolocation, as a method supporting spatial orientation and independent movement of people with visual impairment

    Kamila Miler-Zdanowska, Echolocation, as a method supporting spatial orientation and independent movement of people with visual impairment. Interdisciplinary Contexts of Special Pedagogy, no. 25, Poznań 2019. Pp. 353-371. Adam Mickiewicz University Press. ISSN 2300-391X. DOI: https://doi.org/10.14746/ikps.2019.25.15 People with visual impairment use information from other senses to gain knowledge about the world around them. More and more studies conducted with the participation of visually impaired people indicate that data obtained through auditory perception are extremely important. In this context, the ability of echolocation used by blind people to move independently is interesting. The aim of the article is to present echolocation as a method supporting spatial orientation of people with visual impairment. The article presents the results of empirical studies of echolocation. It also presents the benefits of using this ability in everyday life and outlines research projects related to the methodology of teaching echolocation in Poland.

    The COGs (context, object, and goals) in multisensory processing

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.

    Limb ownership and voluntary action: human behavioral and neuroimaging studies

    To be able to interact with our surroundings in a goal-directed manner, we need to have a sense of what our body is made up of, as well as a sense of being able to control our body. These two experiences, the sense of body ownership and the sense of agency, respectively, are fundamental to our self-perception but have historically not received any notable attention from the scientific community. This lack of interest probably stems from the fact that these experiences are phenomenologically thin in our everyday lives and that we cannot voluntarily turn them off; they are constantly there. However, for patients suffering from disturbances in the processes underlying these experiences, their importance becomes exceedingly clear. Lesions in the frontal, temporal or parietal lobe can lead to patients losing the sense of ownership of their limb (asomatognosia), and sometimes even attributing the limb to someone else (somatoparaphrenia). Similarly, patients suffering from lesions in the frontal lobe, parietal lobe or corpus callosum can experience a lack of control over their own hand (anarchic hand syndrome), while patients suffering from schizophrenia display difficulties in distinguishing self-generated from externally generated actions, implicating disturbances in the processes underlying the sense of agency. With the discovery of body illusions, combined with functional neuroimaging, it became possible to study the perceptual and neural mechanisms of the sense of body ownership in healthy volunteers. Studies using these illusions have elucidated the perceptual rules of body ownership as well as its neural correlates, and have given rise to a number of different philosophical, neurocognitive and computational models of the sense of body ownership. Meanwhile, the sense of agency has mostly been studied in isolation from the sense of body ownership, focusing on agency over self-generated external sensory effects such as auditory tones. This thesis sought to bring these two experiences together and to advance our knowledge of the perceptual and neural mechanisms underlying the sense of body ownership and the sense of agency, as well as how these two experiences interact. Studies I and II investigated certain aspects of the sense of body ownership, in particular its relation to the visuo-proprioceptive recalibration of limb position often seen in bodily illusions. Study III investigated how this visuo-proprioceptive recalibration is related to voluntary but unconscious movements. Study IV investigated the neural correlates of the sense of body ownership and agency as well as their interaction. In Study I, we present empirical evidence in favor of models where the subjective sense of limb ownership is not reliant on a visuo-proprioceptive recalibration of perceived limb position. In Study II, we show that the subjective sense of limb ownership and the visuo-proprioceptive recalibration of limb position have similar temporal decay curves, suggestive of a causal relationship between them. In Study III, we show that the increase in the recalibration of limb position seen in active movements is not dependent on conscious intention, action awareness or salient error signals, indicative of an unconscious efference-copy-based mechanism. Finally, in Study IV, we identify brain regions in the frontal and parietal lobes that are associated with the sense of body ownership, while brain regions in the frontal and temporal lobes are associated with the sense of agency. We show that the sense of agency in the presence of a sense of body ownership (i.e., agency of bodily actions) is associated with increased activity in the primary sensory cortex, whereas the sense of agency in the absence of ownership (i.e., agency of external events) is associated with increased activity in the visual association cortex. Together, these findings shed light on the perceptual and neural mechanisms underlying the sense of body ownership and agency, as well as their interaction.