
    Co-occurrence of diabetes and hopelessness predicts adverse prognosis following percutaneous coronary intervention

    We examined the impact of co-occurring diabetes and hopelessness on 3-year prognosis in percutaneous coronary intervention patients. Consecutive patients (n = 534) treated with the paclitaxel-eluting stent completed a set of questionnaires at baseline and were followed up for 3-year adverse clinical events. The incidence of 3-year death/non-fatal myocardial infarction was 3.5% in patients with no risk factors (neither hopelessness nor diabetes), 8.2% in patients with diabetes, 11.2% in patients with high hopelessness, and 15.9% in patients with both factors (p = 0.001). Patients with hopelessness (HR: 3.28; 95% CI: 1.49-7.23) and co-occurring diabetes and hopelessness (HR: 4.89; 95% CI: 1.86-12.85) were at increased risk of 3-year adverse clinical events compared to patients with no risk factors, whereas patients with diabetes were at a clinically relevant but not statistically significant risk (HR: 2.40; 95% CI: 0.82-7.01). These results remained after adjusting for baseline characteristics.
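    As context for how hazard ratios like these are typically obtained, the sketch below fits a Cox proportional hazards model with dummy-coded risk groups against a no-risk-factor reference. It is a minimal illustration, assuming the `lifelines` library and invented toy data; it is not the study's actual analysis code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data (invented, for illustration only): one row per patient with
# follow-up time in years, an event flag (death / non-fatal MI), and
# dummy-coded risk groups relative to the "no risk factor" reference.
df = pd.DataFrame({
    "time_years":    [3.0, 3.0, 2.5, 1.1, 3.0, 0.9, 3.0, 1.4, 3.0, 2.0, 0.7, 3.0],
    "event":         [0,   0,   0,   1,   0,   1,   0,   1,   0,   1,   1,   0],
    "diabetes_only": [0,   0,   0,   0,   1,   1,   1,   0,   0,   0,   0,   0],
    "hopeless_only": [0,   0,   0,   0,   0,   0,   0,   1,   1,   1,   0,   0],
    "both_factors":  [0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   1,   1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="event")
cph.print_summary()  # the exp(coef) column gives hazard ratios with 95% CIs
```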

    Vision principles for harvest robotics: sowing artificial intelligence in agriculture

    The objective of this work was to further advance technology in agriculture, specifically by pursuing the research direction of agricultural robotics for harvesting in greenhouses, with the specific use-case of Capsicum annuum, also known as sweet or bell pepper. Within this scope, it was previously determined that the primary reason agricultural robotics had not yet matured was the complexity of the tasks due to inherent variation in the crops, in turn limiting harvest success and cycle time. As a solution, it was suggested to further enhance robotic systems with sensing, world modelling and reasoning, for example through approaches such as machine learning and visual servo control. In this work, we followed this suggestion. We identified that facilitating new levels of artificial intelligence in the domains of sensing and motion control would be one way to improve upon classical mechanisation. Specifically, we investigated machine-learning-based, computer-vision-guided manipulation as a step towards a basic form of world representation and autonomy. To this end, in Chapter 2 we developed an eye-in-hand sensing and visual control framework for dense crops, with the goal of overcoming the occlusion and image-registration issues that arise when sensing is performed externally to the robot manipulator. Additionally, simultaneous localisation and mapping was investigated to aid in forming a world model. In Chapter 3 we aimed to reduce the need for annotating empirical images by providing a method to synthetically generate large sets of automatically annotated images as input for convolutional neural network (CNN) based segmentation models. An annotated dataset was created of 10,500 synthetic and 50 empirical images. In Chapter 4 we further investigated how synthetic images can be used to bootstrap CNNs for successful learning on empirical images. We provided agricultural computer vision with a pioneering machine-learning-based methodology for state-of-the-art plant-part segmentation performance, whilst simultaneously reducing the reliance on labour-intensive manual annotation. Chapter 5 explored applying a cycle-consistent generative adversarial network to our dataset, with the objective of generating more realistic synthetic images by translating them to the feature distribution of the empirical domain. We show that this approach can further improve segmentation performance whilst further reducing the requirement for annotated empirical images. In Chapter 6 we aimed to bring all previous chapters into practice. The objective was to estimate angles between fruits and stems from image segmentations to support visual-servo-controlled grasping in a sweet-pepper harvesting robot. Our approach calculated angles under unmodified greenhouse conditions that met the accuracy requirement of 25 degrees in 73% of the cases. Combined, the work presents a promising stepping stone towards agricultural robotics that could help ensure the quality of meals and the nourishment of a growing population. Furthermore, it can become an important technology for societal issues in developed nations, e.g. by solving current labour problems, and can further improve quality of life and contribute to sustainable agricultural production.
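    To make the bootstrapping idea of Chapters 3 and 4 concrete, here is a minimal sketch: pretrain a segmentation CNN on many synthetic image/mask pairs, then fine-tune on the small empirical set. The model choice (an off-the-shelf FCN), the toy loaders, and all training settings are illustrative assumptions, not the thesis's actual pipeline.

```python
import torch
import torch.nn as nn
import torchvision

def make_model(num_classes: int) -> nn.Module:
    # Off-the-shelf FCN stands in for the CNN-based segmentation models
    # discussed above; weights are randomly initialised here.
    return torchvision.models.segmentation.fcn_resnet50(num_classes=num_classes)

def toy_loader(n_batches: int):
    # Stand-in for a real DataLoader over annotated images:
    # (N, 3, H, W) RGB batches with (N, H, W) per-pixel class labels.
    return [(torch.randn(2, 3, 128, 128),
             torch.randint(0, 4, (2, 128, 128))) for _ in range(n_batches)]

def train(model: nn.Module, loader, epochs: int, lr: float) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            opt.zero_grad()
            logits = model(images)["out"]  # (N, C, H, W) class scores
            loss_fn(logits, masks).backward()
            opt.step()

model = make_model(num_classes=4)                  # e.g. background/fruit/stem/wire
train(model, toy_loader(10), epochs=5, lr=1e-4)    # bootstrap on synthetic data
train(model, toy_loader(2),  epochs=2, lr=1e-5)    # fine-tune on empirical data
```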

    Synthetic and Empirical Capsicum Annuum Image Dataset

    This dataset consists of per-pixel annotated synthetic (10,500) and empirical (50) images of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models used to generate the synthetic images are included. The aim of the dataset is to facilitate bootstrapping agricultural semantic-segmentation computer vision models with synthetic data, which are then fine-tuned and tested on empirical images
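    A minimal sketch of how such image/mask pairs are typically loaded for this workflow follows. The directory layout and 1:1 file naming between images and masks are assumptions for illustration; the dataset's actual structure may differ.

```python
from pathlib import Path
import numpy as np
from PIL import Image

def load_pairs(image_dir, mask_dir):
    # Pair each RGB image with its per-pixel class mask (assumed 1:1 naming).
    pairs = []
    for img_path in sorted(Path(image_dir).glob("*.png")):
        mask_path = Path(mask_dir) / img_path.name
        image = np.asarray(Image.open(img_path).convert("RGB"))
        mask = np.asarray(Image.open(mask_path))  # per-pixel class ids
        pairs.append((image, mask))
    return pairs

synthetic = load_pairs("synthetic/images", "synthetic/masks")  # ~10,500 pairs
empirical = load_pairs("empirical/images", "empirical/masks")  # 50 pairs
```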

    Design of an eye-in-hand sensing and servo control framework for harvesting robotics in dense vegetation

    A modular software framework design that allows flexible implementation of eye-in-hand sensing and motion control for agricultural robotics in dense vegetation is reported. Harvesting robots in cultivars with dense vegetation require multiple viewpoints and on-line trajectory adjustments in order to reduce the number of false negatives and to correct for fruit movement. In contrast to specialised software, the proposed framework aims to support a wide variety of agricultural use cases, hardware and extensions. A set of Robot Operating System (ROS) nodes was created to ensure modularity and separation of concerns, implementing functionalities for application control, robot motion control, image acquisition, fruit detection, visual servo control, and simultaneous localisation and mapping (SLAM) for monocular relative depth estimation and scene reconstruction. Coordination was implemented in the application control node with a finite state machine. To provide the visual servo control and SLAM functionalities, the off-the-shelf Visual Servoing Platform (ViSP) and Large Scale Direct SLAM (LSD-SLAM) libraries were wrapped in ROS nodes. The capabilities of the framework are demonstrated by an example implementation for a sweet-pepper crop, combined with hardware consisting of a Baxter robot and a colour camera placed on its end-effector. Qualitative tests were performed under laboratory conditions using an artificial dense-vegetation sweet-pepper crop. Results indicated the framework can be implemented for sensing and robot motion control in sweet pepper using visual information from the end-effector. Future research is suggested to apply the framework to other use cases and to validate the performance of its components in servo applications under real greenhouse conditions
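    As an illustration of the node-per-concern design with a finite state machine in the application control node, here is a minimal rospy sketch. The topic names, states, and use of std_msgs/String are hypothetical stand-ins; the framework's actual interfaces are not reproduced here.

```python
import rospy
from std_msgs.msg import String

class ApplicationControl:
    def __init__(self):
        self.state = "SEARCH"  # finite state machine state
        self.cmd_pub = rospy.Publisher("motion_command", String, queue_size=1)
        rospy.Subscriber("fruit_detections", String, self.on_detection)

    def on_detection(self, msg):
        # Transition from searching to approaching once a fruit is reported
        # by the (separate) fruit detection node.
        if self.state == "SEARCH" and msg.data:
            self.state = "APPROACH"
            self.cmd_pub.publish(String(data="approach:" + msg.data))

if __name__ == "__main__":
    rospy.init_node("application_control")
    ApplicationControl()
    rospy.spin()
```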

    Operational flow of an autonomous sweetpepper harvesting robot

    Advanced automation is required for greenhouse production systems due to the lack of a skilled workforce and increasing labour costs [1]. As part of the EU project SWEEPER, we are working on developing an autonomous robot able to harvest sweet pepper fruits in greenhouses. This paper focuses on the operational flow of the robot for high-level task planning. In the SWEEPER project, an RGB camera is mounted on the end effector to detect fruits. Due to the dense plant rows, the camera is located at most 40 cm from the plants and hence cannot provide an overview of all fruit locations; only a few ripe fruits can be seen at each acquisition. This implies that the robot must incorporate a search pattern to look for fruits. When at least one fruit has been detected in the image, the search is aborted and a harvesting phase is initiated. The phase starts with directing the manipulator to a point close to the fruit and then activating a visual servo control loop. This approach ensures that the fruit is grasped despite the occlusions caused by stems and leaves. When the manipulator has reached the fruit, it is harvested and automatically released into a container. If more fruits have already been detected, the system continues to pick them. When all detected fruits have been harvested, the system resumes the search pattern. When the search pattern is finished and no more fruits are detected, the robot base is advanced along the row to the next plant and the operations above are repeated. To support implementation of the workflow in a program controlling the actual robot, a generic software framework for the development of agricultural and forestry robots was used [2]. The framework is constructed with a hybrid robot architecture, using a state machine that implements this operational flow, sketched below
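    A hedged sketch of that state machine in Python follows. The helper functions (`detect`, `harvest`, `advance`) are hypothetical stand-ins for the robot's actual sensing and motion primitives, not SWEEPER code; `detect` is expected to return only the ripe fruits still remaining at a plant.

```python
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()
    HARVEST = auto()
    ADVANCE = auto()

def run_row(plants, detect, harvest, advance):
    for plant in plants:
        state, queue = State.SEARCH, []
        while state is not State.ADVANCE:
            if state is State.SEARCH:
                # Run the search pattern; abort as soon as fruits are detected.
                queue = detect(plant)
                state = State.HARVEST if queue else State.ADVANCE
            elif state is State.HARVEST:
                # Approach, visual servo, grasp, release into the container.
                harvest(queue.pop())
                if not queue:
                    state = State.SEARCH  # resume the search after the batch
        advance(plant)  # move the base along the row to the next plant

# Toy usage: two plants, each yielding one detection pass before moving on.
picked = []
run_row(
    plants=[1, 2],
    detect=lambda p: [f"fruit-{p}"] if p not in picked else [],
    harvest=lambda f: picked.append(int(f.split("-")[1])),
    advance=lambda p: None,
)
```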

    Angle estimation between plant parts for grasp optimisation in harvest robots

    For many robotic harvesting applications, the position of and angle between plant parts are required to optimally position the end-effector before attempting to approach, grasp and cut the product. A method for estimating the angle between plant parts, e.g. stem and fruit, is presented to support the optimisation of grasp pose for harvest robots. The hypothesis is that this angle in the horizontal plane can be accurately derived from colour images under unmodified greenhouse conditions. It was further hypothesised that the locations of a fruit and stem can be inferred in the image plane from sparse semantic segmentations. The paper focussed on four sub-tasks for a sweet-pepper harvesting robot. Each task was evaluated under three conditions: laboratory, simplified greenhouse and unmodified greenhouse. The requirements for each task were based on an end-effector design that required a 25° positioning accuracy. In Task I, colour image segmentation into the classes background, fruit, and stem plus wire was performed, meeting the requirement of an intersection-over-union > 0.58. In Task II, the stem pose was estimated from the segmentations. In Task III, the centres of the fruit and stem were estimated from the output of the previous tasks. Both centre estimations, in Tasks II and III, met the requirement of 25-pixel accuracy on average. In Task IV, the centres were used to estimate the angle between the fruit and stem, meeting the accuracy requirement of 25° in 73% of the cases. The work improved harvest performance, increasing the success rate from a theoretical 14% to 52% in practice under unmodified conditions.
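    A minimal sketch of the geometric idea in Task IV follows: given fruit and stem centres recovered from the segmentations (pixel coordinates in the image plane), estimate the angle between fruit and stem. The paper's exact geometric model is not reproduced; this shows the atan2-style computation such a model reduces to, with invented centres and ground truth.

```python
import math

def fruit_stem_angle(fruit_xy, stem_xy):
    """Angle (degrees) of the fruit centre relative to the stem centre,
    computed in the image plane from the two estimated centres."""
    dx = fruit_xy[0] - stem_xy[0]
    dy = fruit_xy[1] - stem_xy[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical centres (pixels) and ground-truth angle, for illustration.
estimate = fruit_stem_angle((412, 310), (380, 295))
ground_truth = 28.0
meets_requirement = abs(estimate - ground_truth) <= 25.0  # the 25-degree spec
```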

    Erratum to ‘Angle estimation between plant parts for grasp optimisation in harvest robots’

    The publisher regrets the error in the previously published highlights. Highlights for this paper should be as follows:
    • A method for estimating the angle between the plant stem and fruit is presented.
    • Colour image segmentation of plant parts is performed by deep learning.
    • Segmentations are used to localize plant part centres.
    • A geometric model estimates angles between plant parts.
    • The angles support grasp pose optimisation in a sweet-pepper harvesting robot.
    The publisher would like to apologise for any inconvenience caused.