    Adaptive cancelation of self-generated sensory signals in a whisking robot

    Sensory signals are often caused by one's own active movements. This raises the problem of discriminating between self-generated sensory signals and signals generated by the external world. Such discrimination is of general importance for robotic systems, whose operational robustness depends on the correct interpretation of sensory signals. Here, we investigate this problem in the context of a whiskered robot. The whisker sensory signal comprises two components: one due to contact with an object (externally generated) and another due to active movement of the whisker (self-generated). We propose a solution to this discrimination problem based on adaptive noise cancelation, in which the robot learns to predict the sensory consequences of its own movements using an adaptive filter. The filter inputs (a copy of the motor commands) are transformed by Laguerre functions instead of the often-used tapped-delay line, which reduces the model order and, therefore, the computational complexity. Results from a contact-detection task demonstrate that false positives are significantly reduced using the proposed scheme.
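
    As a rough illustration of this scheme (a minimal sketch under stated assumptions, not the authors' implementation), the motor-command copy can drive a discrete Laguerre filter bank whose weighted sum predicts the self-generated component; the pole, order, and step size below are illustrative free parameters:

        import numpy as np

        class LaguerreLMS:
            """Adaptive canceller: a Laguerre filter bank (pole a, order K)
            replaces the tapped-delay line, and the weights adapt by LMS."""

            def __init__(self, order=4, pole=0.8, mu=0.05):
                self.a, self.K, self.mu = pole, order, mu
                self.l = np.zeros(order)   # Laguerre basis outputs l_0 .. l_{K-1}
                self.w = np.zeros(order)   # adaptive weights

            def step(self, motor_cmd, whisker_signal):
                a, prev = self.a, self.l.copy()
                # stage 0: first-order low-pass of the motor-command copy
                self.l[0] = a * prev[0] + np.sqrt(1.0 - a * a) * motor_cmd
                # stages 1..K-1: cascade of identical first-order all-pass sections
                for k in range(1, self.K):
                    self.l[k] = a * prev[k] + prev[k - 1] - a * self.l[k - 1]
                y_hat = self.w @ self.l                # predicted self-generated part
                residual = whisker_signal - y_hat      # externally generated part remains
                self.w += self.mu * residual * self.l  # LMS weight update
                return residual

    Thresholding the returned residual would then implement contact detection: the predictable self-generated component has been subtracted out, so fewer threshold crossings are false positives.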

    Adaptation of sensor morphology: an integrative view of perception from biologically inspired robotics perspective

    Sensor morphology, the morphology of a sensing mechanism that shapes the response to physical stimuli from the surroundings into signals usable as sensory information, is one of the key common aspects of sensing processes. This paper presents a structured review of research on bioinspired sensor morphology implemented in robotic systems, and discusses the fundamental design principles. Based on this literature review, we propose two key arguments: first, owing to its synthetic nature, the biologically inspired robotics approach is a unique and powerful methodology for understanding the role of sensor morphology and how it can evolve and adapt to its task and environment. Second, an integrative view of perception, looking into multidisciplinary and overarching mechanisms of sensor morphology adaptation across biology and engineering, enables us to extract relevant design principles that are important for extending our understanding of the unfinished concepts in sensing and perception. This study was supported by the European Commission with the RoboSoft CA (A Coordination Action for Soft Robotics, contract #619319). SGN was supported by School of Engineering seed funding (2016), Malaysia Campus, Monash University.

    GRASP News, Volume 8, Number 1

    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory. Edited by Thomas Lindsay.

    White paper - Agricultural Robotics: The Future of Robotic Agriculture

    Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over £108bn p.a., with 3.9m employees in a truly international industry, and exports £20bn of UK-manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions, and the demographics of an ageing global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a wave 2 Industrial Challenge Fund investment ("Transforming Food Production: from Farm to Fork"). Robotics and autonomous systems (RAS) and associated digital technologies are now seen as enablers of this critical food-chain transformation. To meet these challenges, we review here the state of the art in the application of RAS to Agri-Food production and explore the research and innovation needs required to ensure that novel advanced robotic and autonomous systems reach their full potential and deliver the necessary impacts.

    The opportunities for RAS range from the development of field robots that can assist workers by carrying weights and conducting agricultural operations such as crop and animal sensing, weeding and drilling; through the integration of autonomous system technologies into existing farm equipment such as tractors; robotic systems to harvest crops and conduct complex dexterous operations; and collaborative, "human-in-the-loop" robotic applications that augment worker productivity; to advanced robotic applications, including soft robotics, that drive productivity beyond the farm gate into the factory and retail environment.

    RAS technology has the potential to transform food production, and the UK has the potential to establish global leadership in the domain. However, particular barriers must be overcome to secure this vision:

    1. The UK RAS community with an interest in Agri-Food is small and highly dispersed. There is an urgent need to defragment and then expand the community.
    2. The UK RAS community has no specific training paths or Centres for Doctoral Training to provide trained human resource capacity within Agri-Food.
    3. While there has been substantial government investment in translational activities at high Technology Readiness Levels (TRLs), there is insufficient ongoing basic research in Agri-Food RAS at low TRLs to underpin onward innovation delivery for industry.
    4. RAS for Agri-Food is not realising its full potential: the projects currently being commissioned are too few and too small in scale. RAS challenges often involve the complex integration of multiple discrete technologies (e.g. navigation, safe operation, multimodal sensing, automated perception, grasping and manipulation). These discrete technologies need further development, but large-scale industrial applications that resolve integration and interoperability issues must also be delivered. The UK community needs to undertake a few well-chosen, large-scale, collaborative "moon shot" projects.
    5. The successful delivery of RAS projects within Agri-Food requires close collaboration within the RAS community and with academic and industry practitioners. For example, the breeding of crops with novel phenotypes, such as fruits that are easy for robots to see and pick, may simplify and accelerate the application of RAS technologies.

    Therefore, there is an urgent need to seek new ways to create RAS and Agri-Food domain networks that can work collaboratively to address key challenges. This is especially important for Agri-Food, since success in the sector requires highly complex cross-disciplinary activity. Furthermore, within UKRI most of the Research Councils (EPSRC, BBSRC, NERC, STFC, ESRC and MRC) and Innovate UK directly fund work in Agri-Food, but as yet there is no coordinated and integrated Agri-Food research policy per se.

    Our vision is a new generation of smart, flexible, robust, compliant, interconnected robotic systems working seamlessly alongside their human co-workers in farms and food factories. Teams of multimodal, interoperable robotic systems will self-organise and coordinate their activities with the human in the loop. Electric farm and factory robots with interchangeable tools, including low-tillage solutions, novel soft robotic grasping technologies and sensors, will support the sustainable intensification of agriculture, drive manufacturing productivity and underpin future food security. To deliver this vision, the research and innovation needs include the development of robust robotic platforms suited to agricultural environments, together with improved capabilities for sensing and perception, planning and coordination, manipulation and grasping, learning and adaptation, interoperability between robots and existing machinery, and human-robot collaboration, including the key issues of safety and user acceptance.

    Technology adoption is likely to occur in measured steps. Most farmers and food producers will need technologies that can be introduced gradually, alongside and within their existing production systems. Thus, for the foreseeable future, humans and robots will frequently operate collaboratively to perform tasks, and that collaboration must be safe. There will be a transition period in which humans and robots work together as first simple, then more complex parts of the work are conducted by robots, driving productivity and enabling human jobs to move up the value chain.

    Neuromorphic tactile sensor array based on fiber Bragg gratings to encode object qualities

    Emulating the sense of touch is fundamental to endowing robotic systems with perception abilities. This work presents an unprecedented mechanoreceptor-like neuromorphic tactile sensor implemented with fiber-optic sensing technologies. A robotic gripper was sensorized using soft, flexible tactile sensors based on Fiber Bragg Grating (FBG) transducers and a neuro-bio-inspired model to extract tactile features. The FBGs connected to the neuron model emulated biological mechanoreceptors by encoding tactile information as spikes. Converting the inflowing tactile information into event-based spikes has the advantage of reducing the bandwidth required for communication between the sensing and computational subsystems of robots. The outputs of the sensor were converted into spiking on-off events by an architecture implemented in a Field Programmable Gate Array (FPGA) and applied to robotic manipulation tasks to evaluate the effectiveness of this information-encoding strategy. Different tasks were performed with the objective of granting fine manipulation abilities using the features extracted from the grasped objects (i.e., size and hardness). This is envisioned as a futuristic sensor technology combining two promising fields: optical and neuromorphic sensing.
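
    As a loose software analogue of that spike conversion (the paper's encoder runs on an FPGA with its own neuron model; the plain leaky integrate-and-fire neuron and all constants below are illustrative assumptions, not taken from the paper), an analog tactile channel can be turned into timestamped events like this:

        import numpy as np

        def lif_spike_encode(signal, dt=1e-3, tau=0.02, threshold=1.0, gain=50.0):
            """Encode an analog tactile sample stream (e.g. FBG wavelength
            shift) as spike timestamps; all constants are illustrative."""
            v, spikes = 0.0, []
            for n, x in enumerate(signal):
                v += dt * (-v / tau + gain * x)   # leaky integration of the input
                if v >= threshold:                # threshold crossing -> one event
                    spikes.append(n * dt)
                    v = 0.0                       # reset after the spike
            return spikes

        # A press-and-release profile: spike density tracks contact intensity.
        t = np.linspace(0.0, 1.0, 1000)
        pressure = np.clip(np.sin(2 * np.pi * t), 0.0, None)
        events = lif_spike_encode(pressure)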

    In silico case studies of compliant robots: AMARSI deliverable 3.3

    In Deliverable 3.2 we presented how the morphological computation approach can significantly simplify the control strategy in several scenarios, e.g. quadruped locomotion, bipedal locomotion and reaching. In particular, the Kitty experimental platform is an example of the use of morphological computation to enable quadruped locomotion. In this deliverable we continue with simulation studies on the application of the different morphological computation strategies to the control of a robotic system.
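
    For readers unfamiliar with the term, here is a minimal sketch of morphological computation in the "physical reservoir" sense (sizes, constants and task are illustrative; this is not code from the deliverable): the dynamics of a compliant mass-spring body do most of the computational work, and only a linear readout needs to be trained:

        import numpy as np

        rng = np.random.default_rng(0)
        N, T, dt = 20, 4000, 0.01
        k = rng.uniform(1.0, 5.0, (N, N)); k = (k + k.T) / 2  # symmetric spring constants
        w_in = rng.normal(0.0, 1.0, N)          # how the motor drive enters the body
        x, v = np.zeros(N), np.zeros(N)         # unit-mass positions and velocities
        states = np.empty((T, N))
        t_axis = np.arange(T) * dt
        drive = np.sin(2.0 * t_axis)            # motor drive signal
        for t in range(T):
            # zero-rest-length springs between all masses, light damping, drive input
            f = (k * (x[None, :] - x[:, None])).sum(axis=1) - 0.5 * v + w_in * drive[t]
            v += dt * f                         # semi-implicit (symplectic) Euler
            x += dt * v
            states[t] = x
        # Train a linear readout to reproduce a *delayed* copy of the drive: the
        # body's compliant dynamics supply the memory a controller would
        # otherwise have to implement itself.
        target = np.sin(2.0 * (t_axis - 0.4))
        w_out, *_ = np.linalg.lstsq(states[200:], target[200:], rcond=None)
        print("readout MSE:", np.mean((states[200:] @ w_out - target[200:]) ** 2))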

    Towards adaptive and autonomous humanoid robots: from vision to actions

    Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these scenarios raise a clear need to deal with more unstructured, changing environments. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot-vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in the incoming camera streams and was successfully demonstrated on many different problem domains. The approach requires only a small training set (it was tested with 5 to 10 images per experiment) and is fast, scalable and robust. Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs-of-concept that integrate the motion and action sides. First, reactive reaching and grasping is shown: the robot avoids obstacles detected in the visual stream while reaching for the intended target object. This integration also enables use of the robot in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
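
    To make the CGP-IP idea concrete, here is a heavily simplified sketch of Cartesian Genetic Programming over image-processing primitives (the primitive set, genome layout and fitness measure are illustrative assumptions, not the dissertation's implementation):

        import numpy as np

        # Illustrative primitive set: each takes two images, returns one.
        PRIMS = [
            lambda a, b: np.minimum(a, b),
            lambda a, b: np.maximum(a, b),
            lambda a, b: np.abs(a - b),
            lambda a, b: 255.0 - a,                                  # invert (ignores b)
            lambda a, b: (a > b.mean()).astype(np.float32) * 255.0,  # threshold
        ]

        def evaluate(genome, image):
            """genome: list of (func, in1, in2); node i may only read the
            input image (index 0) or outputs of earlier nodes (feed-forward)."""
            outputs = [image.astype(np.float32)]
            for func, i1, i2 in genome:
                outputs.append(PRIMS[func](outputs[i1], outputs[i2]))
            return outputs[-1]                  # last node is the program output

        def mutate(genome, rng, rate=0.1):
            new = []
            for i, (func, i1, i2) in enumerate(genome):
                if rng.random() < rate:         # CGP-style point mutation
                    func = int(rng.integers(len(PRIMS)))
                    i1, i2 = int(rng.integers(1 + i)), int(rng.integers(1 + i))
                new.append((func, i1, i2))
            return new

        def fitness(genome, image, target_mask):
            out = evaluate(genome, image)
            return np.mean((out > 127) == (target_mask > 127))  # pixel accuracy

        # A random 8-node genome; a 1+4 evolution strategy over mutate/fitness
        # (the usual CGP loop) would then search for a detector program.
        rng = np.random.default_rng(0)
        genome = [(int(rng.integers(len(PRIMS))),
                   int(rng.integers(1 + i)), int(rng.integers(1 + i)))
                  for i in range(8)]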

    Prefrontal cortex activation upon a demanding virtual hand-controlled task: A new frontier for neuroergonomics

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive, vascular-based functional neuroimaging technology that can assess, simultaneously from multiple cortical areas, concentration changes in oxygenated and deoxygenated hemoglobin at the level of the cortical microcirculation. fNIRS, with its high degree of ecological validity and its very limited physical constraints on subjects, could represent a valid tool for monitoring cortical responses in the research field of neuroergonomics. In virtual reality (VR), real situations can be replicated with greater control than is obtainable in the real world. VR is therefore the ideal setting for studies of neuroergonomics applications. The aim of the present study was to investigate, with a 20-channel fNIRS system, the dorsolateral/ventrolateral prefrontal cortex (DLPFC/VLPFC) in subjects performing a demanding VR hand-controlled task (HCT). Given the complexity of the HCT, its execution should require the allocation of attentional resources and the integration of different executive functions. The HCT simulates interaction with a real, remotely driven system operating in a critical environment. Hand movements were captured by a high-spatial- and temporal-resolution three-dimensional (3D) hand-sensing device, the LEAP motion controller, a gesture-based control interface that could be used in VR for tele-operated applications. Fifteen university students were asked to guide, with their right hand/forearm, a virtual ball (VB) over a virtual route (VROU) reproducing a 42 m narrow road including some critical points. The subjects tried to travel as far as possible without letting the VB fall. The distance traveled by the guided VB was 70.2 ± 37.2 m. The less skilled subjects failed several times to guide the VB over the VROU. Nevertheless, bilateral VLPFC activation in response to HCT execution was observed in all subjects. No correlation was found between the distance traveled by the guided VB and the corresponding cortical activation. These results confirm the suitability of fNIRS technology for objectively evaluating cortical hemodynamic changes occurring in VR environments. Future studies could contribute to a better understanding of the cognitive mechanisms underlying human performance, in both expert and non-expert operators, during the simulation of different demanding/fatiguing activities.
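
    For background on the measurement itself: the oxy/deoxy-hemoglobin concentration changes that fNIRS reports are conventionally recovered from optical-density changes via the modified Beer-Lambert law. A minimal two-wavelength sketch (textbook method, not code from this study; the coefficient values are placeholders):

        import numpy as np

        # Extinction coefficients of HbO/HbR at the two wavelengths; the
        # numbers here are placeholders, not values from the study.
        eps = np.array([[0.59, 1.55],    # 760 nm: (HbO, HbR)
                        [1.06, 0.69]])   # 850 nm: (HbO, HbR)
        d, dpf = 3.0, 6.0                # source-detector distance (cm), pathlength factor

        def mbll(delta_od):
            """delta_od: optical-density changes at the two wavelengths.
            Returns the (dHbO, dHbR) concentration changes per sample."""
            return np.linalg.solve(eps * d * dpf, delta_od)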