
    Active people recognition using thermal and grey images on a mobile security robot

    In this paper we present a vision-based approach to detect, track and identify people on a mobile robot in real time. While most vision systems for tracking people on mobile robots use skin color information, we present an approach using thermal images and a fast contour model together with a particle filter. With this method a person can be detected independently of the current lighting conditions, and in situations where no skin color is visible (e.g. when the person is not close to, or not facing, the robot). Tracking in thermal images is used as an attention system to obtain an estimate of the person's position. Based on this estimate, we use a pan-tilt camera to zoom in on the expected face region and apply a fast face tracker in combination with face recognition to identify the person.
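    As a rough illustration of the tracking component, the sketch below implements a generic particle filter over 2D image positions whose likelihood simply favours hot pixels in the thermal frame. The state model, motion noise and function names are illustrative assumptions, not the authors' contour-based implementation.

```python
# Minimal particle-filter sketch for tracking a warm region in a thermal
# image. `frame` is assumed to be a 2D numpy array where warmer is brighter.
import numpy as np

def track_step(particles, weights, frame, motion_std=5.0):
    h, w = frame.shape
    # Predict: diffuse particles with a Gaussian random-walk motion model.
    particles = particles + np.random.normal(0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)
    # Update: weight each particle by the thermal intensity beneath it.
    xs = particles[:, 0].astype(int)
    ys = particles[:, 1].astype(int)
    weights = weights * (frame[ys, xs].astype(float) + 1e-6)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = np.random.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    # The weighted mean is the position estimate handed to the pan-tilt camera.
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```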

    L*a*b*Fruits: a rapid and robust outdoor fruit detection system combining bio-inspired features with one-stage deep learning networks

    Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings using standard (RGB) camera input. The resultant system was tested on an existing dataset captured in controlled conditions, as well as on our new real-world dataset captured on a real strawberry farm over two months. We use the F1 score, the harmonic mean of precision and recall, to show that our system matches state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world dataset (F1: 0.744); and runs at a fraction of the computational cost, allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof of principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.
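    For reference, the sketch below shows the F1 metric quoted above together with the L*a*b* colour-opponent conversion available in OpenCV; the detection counts are illustrative placeholders chosen to reproduce an F1 of roughly 0.793, not the paper's data.

```python
import cv2
import numpy as np

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall, as used in the paper."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Colour-opponent input: OpenCV converts BGR to L*a*b*, where a* and b* are
# the red-green and blue-yellow opponent axes. A dummy frame stands in for
# real camera input here.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)

# Illustrative counts only: precision = recall = 750/946, giving F1 ~ 0.793.
print(round(f1_score(tp=750, fp=196, fn=196), 3))
```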

    The effectiveness of integrating educational robotic activities into higher education Computer Science curricula: a case study in a developing country

    In this paper, we present a case study investigating the effects of educational robotics on formal undergraduate Computer Science education in a developing country. The key contributions of this paper include a longitudinal study design, spanning the whole duration of one taught course, and its focus on continually assessing the effectiveness and impact of robotics-based exercises. The study assessed the students' motivation, engagement and level of understanding in learning general computer programming. The survey results indicate that there are benefits to be gained from such activities and that educational robotics is a promising tool for developing engaging study curricula. We hope that our experience from this study, together with the free materials and data available for download, will be beneficial to other practitioners working with educational robotics in different parts of the world.

    An Agricultural Precision Sprayer Deposit Identification System

    Engineering and Physical Sciences Research Council [EP/S023917/1]

    Using Additional Moderator to Control the Footprint of a COSMOS Rover for Soil Moisture Measurement

    Cosmic Ray Neutron Probes (CRNP) have found application in soil moisture estimation due to their conveniently large (>100 m) footprints. Here we explore the possibility of using high-density polyethylene (HDPE) moderator to limit the field of view, and hence the footprint, of a soil moisture sensor formed of 12 CRNP mounted onto a mobile robotic platform (Thorvald), for better in-field localisation of moisture variation. URANOS neutron scattering simulations are used to show that 5 cm of additional HDPE moderator (used to shield the upper surface and sides of the detector) is sufficient to (i) reduce the footprint of the detector considerably, (ii) approximately double the percentage of neutrons detected from within 5 m of the detector, and (iii) leave unchanged the shape of the curve used to convert neutron counts into soil moisture. Simulation and rover measurements for a transect crossing between grass and concrete additionally suggest that (iv) soil moisture changes can be sensed over length scales of tens of metres or less (roughly an order of magnitude smaller than commonly used footprint distances), and (v) the additional moderator does not reduce the detected neutron count rate (and hence increase noise) as much as might be expected given the extent of the additional moderator. The detector with additional HDPE moderator was also used to conduct measurements on a stubble field over three weeks to test the rover system in measuring spatial and temporal soil moisture variation.
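    For context, a commonly used count-to-moisture curve in the CRNP literature is the calibration function of Desilets et al. (2010), sketched below. The abstract states only that the curve's shape is unaffected by the extra moderator; the constants shown are the widely cited published defaults, not values fitted in this study.

```python
def counts_to_soil_moisture(n, n0, a0=0.0808, a1=0.372, a2=0.115):
    """Gravimetric soil moisture (g/g) from neutron count rate `n`,
    given `n0`, the count rate over dry soil at the same site.
    Constants are the standard Desilets et al. (2010) defaults."""
    return a0 / (n / n0 - a1) - a2
```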

    Integrating mobile robotics and vision with undergraduate computer science


    A portable navigation system with an adaptive multimodal interface for the blind

    Recent advances in mobile technology have the potential to radically change the quality of tools available for people with sensory impairments, in particular the blind and partially sighted. Nowadays almost every smartphone and tablet is equipped with high-resolution cameras, typically used for photos, videos, games and virtual reality applications. Very little, however, has been proposed to exploit these sensors for user localisation and navigation. To this end, the "Active Vision with Human-in-the-Loop for the Visually Impaired" (ActiVis) project aims to develop a novel electronic travel aid to tackle the "last 10 yards problem" and enable blind users to navigate independently in unknown environments, ultimately enhancing or replacing existing solutions such as guide dogs and white canes. This paper describes some of the project's key challenges, in particular with respect to the design of a user interface (UI) that translates visual information from the camera into guidance instructions for the blind person, taking into account the limitations introduced by visual impairment. We also propose a multimodal UI, tailored to the needs of the visually impaired, that exploits progressive human-machine co-adaptation to enhance the user's experience and improve navigation performance.

    Feasibility Study of In-Field Phenotypic Trait Extraction for Robotic Soft-Fruit Operations

    There are many agricultural applications that would benefit from robotic monitoring of soft fruit; examples include harvesting and yield forecasting. Autonomous mobile robotic platforms enable the digitisation of horticultural processes in-field, reducing labour demand and increasing efficiency through continuous operation. It is critical for vision-based fruit detection methods to estimate traits such as size, mass and volume for quality assessment, maturity estimation and yield forecasting. Estimating these traits from a camera mounted on a mobile robot is a non-destructive, non-invasive approach to gathering qualitative fruit data in-field. We investigate the feasibility of using vision-based modalities for precise, cheap and real-time computation of phenotypic traits: the mass and volume of strawberries from planar RGB slices and, optionally, point data. Our best method achieves a marginal error of 3.00 cm³ for volume estimation. The planar RGB slices can be computed manually or by using common object detection methods such as Mask R-CNN.
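    As a hedged illustration of trait extraction from a planar slice, the sketch below estimates volume by fitting principal axes to a segmentation mask and assuming an ellipsoid of revolution. This is a simple geometric baseline under stated assumptions, not the method evaluated in the paper.

```python
import numpy as np

def volume_from_mask(mask, mm_per_px):
    """Estimate volume (cm^3) from a boolean 2D mask (e.g. from Mask R-CNN)
    and the image scale in millimetres per pixel."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    # For a uniformly filled ellipse, the semi-axis along each principal
    # direction equals twice the standard deviation along it.
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
    a_mm, b_mm = 2.0 * np.sqrt(evals) * mm_per_px
    # Assume an ellipsoid of revolution about the major axis: V = 4/3*pi*a*b*b.
    vol_mm3 = (4.0 / 3.0) * np.pi * a_mm * b_mm * b_mm
    return vol_mm3 / 1000.0  # mm^3 -> cm^3
```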

    Experimental Analysis of a Spatialised Audio Interface for People with Visual Impairments

    Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this article is part of an ongoing project whose aim is to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments involving a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state of the art, we found that Fitts's law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.
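    For readers unfamiliar with the model, the sketch below shows the Shannon formulation of Fitts's law, in which predicted movement time grows linearly with an index of difficulty; the coefficients would be fitted to experimental data, and the values here are placeholders, not the study's results.

```python
import math

def index_of_difficulty(distance, width):
    """Index of difficulty in bits (Shannon formulation)."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, a=0.3, b=0.9):
    """Fitts's law: MT = a + b * ID. `a` (s) and `b` (s/bit) are
    placeholder coefficients that would normally be fitted to data."""
    return a + b * index_of_difficulty(distance, width)
```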

    Bone-conduction audio interface to guide people with visual impairments

    The ActiVis project's aim is to build a mobile guidance aid to help people with limited vision find objects in an unknown environment. This system uses bone-conduction headphones to transmit audio signals to the user and requires an effective non-visual interface. To this end, we propose a new audio-based interface that uses a spatialised signal to convey a target's position on the horizontal plane. The vertical position on the median plane is given by adjusting the tone's pitch, to overcome the audio localisation limitations of bone-conduction headphones. This interface is validated through a set of experiments with blindfolded and visually impaired participants.
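    A minimal sketch of such a mapping is given below, assuming constant-power stereo panning for azimuth and a linear elevation-to-pitch mapping; the gain law, frequency range and function names are illustrative assumptions rather than the project's tuned parameters.

```python
import math

def cue_for_target(azimuth_deg, elevation_deg,
                   f_low=400.0, f_high=1600.0, max_elev=45.0):
    """Return (left_gain, right_gain, pitch_hz) for a target direction."""
    # Horizontal plane: constant-power pan, azimuth -90 (left) .. +90 (right).
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))
    theta = (pan + 1.0) * math.pi / 4.0
    left_gain, right_gain = math.cos(theta), math.sin(theta)
    # Median plane: map elevation linearly onto a pitch range, since
    # bone-conduction headphones localise poorly in the vertical.
    t = (elevation_deg + max_elev) / (2.0 * max_elev)
    pitch_hz = f_low + max(0.0, min(1.0, t)) * (f_high - f_low)
    return left_gain, right_gain, pitch_hz
```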