928 research outputs found

    A New Approach to Visual-Based Sensory System for Navigation into Orange Groves

    Get PDF
    One of the most important tasks for an autonomous robot is to establish the path along which it should navigate in order to achieve its goals. In agricultural robotics, a procedure that determines this desired path is particularly useful. In this paper, a new virtual sensor is introduced to classify the elements of an orange grove. The proposed sensor is based on a colour CCD camera with an auto-iris lens, which captures the real environment, and an ensemble of neural networks, which processes each capture and differentiates the elements of the image. The Hough transform and other operations are then applied to extract the desired path from the classification performed by the virtual sensory system. With this approach, the robotic system can correct its deviation with respect to the desired path. The results show that the sensory system properly classifies the elements of the grove and can set the trajectory of the robot.
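The path-extraction step described in the abstract can be sketched with a minimal Hough transform over the pixels a classifier has labelled as traversable ground. Everything below (function name, bin counts, synthetic input) is illustrative, not the paper's implementation, which combines the transform with further operations:

```python
import numpy as np

def hough_line(points, img_diag, n_theta=180, n_rho=200):
    """Minimal Hough transform: vote for (rho, theta) line parameters
    over a set of 2-D points, e.g. pixels labelled as traversable soil.

    Returns the (rho, theta) of the dominant line in normal form:
    rho = x * cos(theta) + y * sin(theta).
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        r = x * cos_t + y * sin_t            # rho at every theta
        idx = np.clip(np.searchsorted(rhos, r), 0, n_rho - 1)
        acc[idx, np.arange(n_theta)] += 1    # one vote per (rho, theta) bin
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[i], thetas[j]
```

The dominant `(rho, theta)` line can then be compared against the image centre line to estimate the robot's lateral deviation from the desired path.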

    Combining Satellite Images and Cadastral Information for Outdoor Autonomous Mapping and Navigation: A Proof-of-Concept Study in Citric Groves

    Get PDF
    The development of robotic applications for agricultural environments poses several problems which are not present in robotic systems for indoor environments. Some of these problems can be solved with an efficient navigation system. In this paper, a new system is introduced to improve navigation tasks for robots operating in agricultural environments. Concretely, the paper focuses on the autonomous mapping of agricultural parcels (i.e., an orange grove). The map created by the system is used to help robots navigate within the parcel to perform maintenance tasks such as weed removal, harvesting, or pest inspection. The proposed system connects to a satellite positioning service to obtain the real coordinates at which the robotic system is placed. With these coordinates, the parcel information is downloaded from an online map service in order to autonomously obtain a map of the parcel in a format readable by the robot. Finally, path planning is performed by means of Fast Marching techniques, using either a single robot or a team of two robots. This paper introduces the proof of concept and describes all the necessary steps and algorithms to obtain the planned path from just the initial coordinates of the robot.
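The Fast Marching idea behind the path planning can be illustrated with a simplified arrival-time computation over a grid map of the parcel. The sketch below uses a Dijkstra-style wavefront as a stand-in (true Fast Marching solves the same arrival-time problem with a more accurate upwind update of the Eikonal equation); the grid encoding and names are assumptions, not the paper's code:

```python
import heapq
import numpy as np

def arrival_times(speed, start):
    """Propagate a wavefront from `start` over a grid of local speeds.

    speed: 2-D array; 0 marks obstacles (e.g. tree rows), >0 traversable.
    start: (row, col) of the robot's initial cell.
    Returns the arrival time T of the front at every cell.
    """
    T = np.full(speed.shape, np.inf)
    T[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > T[r, c]:
            continue                          # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < speed.shape[0] and 0 <= nc < speed.shape[1]
                    and speed[nr, nc] > 0):
                nt = t + 1.0 / speed[nr, nc]  # time to cross the cell
                if nt < T[nr, nc]:
                    T[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return T
```

A path from any goal cell back to the start can then be recovered by descending the arrival-time field, which is how Fast Marching planners typically extract trajectories.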

    Two-stage procedure based on smoothed ensembles of neural networks applied to weed detection in orange groves

    Get PDF
    The potential impacts of herbicide utilization compel producers to use new methods of weed control. The problem of reducing the amount of herbicide while maintaining crop production has stimulated many researchers to study selective herbicide application. The key to selective herbicide application is discriminating the weed areas efficiently. We introduce a procedure for weed detection in orange groves which consists of two stages. In the first stage, the main features in an image of the grove are determined (Trees, Trunks, Soil and Sky). In the second, weeds are detected only in those areas determined as Soil in the first stage. Due to the characteristics of weed detection (changing weather and light conditions), we introduce a new training procedure with noisy patterns for ensembles of neural networks. In the experiments, the new noisy learning was first compared on a set of well-known classification problems from the machine learning repository published by the University of California, Irvine, to determine the general behaviour and performance of the noisy ensembles. The noisy ensembles were then applied to images from orange groves to locate weeds using the proposed two-stage procedure. The main results of this contribution show that the proposed system is suitable for weed detection in orange and similar groves.
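The noisy-pattern training idea can be sketched as follows: each ensemble member is trained on an input set perturbed with Gaussian noise, mimicking changing light and weather, and predictions are combined by majority vote. This is only an outline under assumed names (`train_one` is any trainable binary classifier supplied by the caller); the paper's networks and smoothing scheme are its own:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_noisy_ensemble(X, y, train_one, n_members=9, sigma=0.1):
    """Train each ensemble member on a noise-perturbed copy of the data.

    train_one(X, y) -> prediction function; sigma scales the Gaussian
    noise added to the inputs before each member is trained.
    """
    members = []
    for _ in range(n_members):
        X_noisy = X + rng.normal(0.0, sigma, size=X.shape)
        members.append(train_one(X_noisy, y))
    return members

def vote(members, X):
    """Majority vote over the ensemble's binary predictions."""
    preds = np.stack([m(X) for m in members])      # (n_members, n_samples)
    return (preds.mean(axis=0) > 0.5).astype(int)
```

With an odd number of members the vote is never tied, which is one reason odd ensemble sizes are a common default.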

    Brainstem plasticity in vestibular motion-processing sensorimotor networks

    Get PDF

    Understanding Orientation and Mobility learning and teaching for primary students with vision impairment: a qualitative inquiry

    Get PDF
    Orientation and Mobility is a uniquely crafted pedagogical practice blending specific microteaching skills to enable students with vision impairment to achieve functional interpretation of extra-personal and peri-personal space. Linked to student wellbeing, social participation, employment and self-determination, Orientation and Mobility is a cornerstone of equity and access for students with vision impairment. Despite this, in mainstream primary education little is known about Orientation and Mobility learning and teaching and how it aligns with the Australian Curriculum. Orientation and Mobility learning and teaching is examined from the perspectives of three female primary school students with vision impairment, a parent, a teacher, the researcher, and a panel of Orientation and Mobility specialists. These perspectives are interwoven with a detailed reflexive interrogation of the Orientation and Mobility lessons over one school semester within the contexts of the Far North and North Queensland Department of Education regions and the Australian Curriculum. This study explores how one Queensland Orientation and Mobility teacher, the researcher, explicitly communicates nonvisual, visual, tactile, and auditory concepts to primary school students with vision impairment. Drawing on Bronfenbrenner's bioecological systems theory, the Orientation and Mobility learning experiences are captured through an interpretative methodology comprising narrative inquiry and autoethnography, both underpinned by hermeneutic phenomenology. Insider-researcher data are gathered from semi-structured interviews, online panel responses, and audio recordings of the Orientation and Mobility lessons. Autoethnographic field notes, document materials, and reflexive teaching journals are used to support the thematic and discourse analysis.
Results confirm that for the non-expert participants there was a substantial lack of awareness of the impact of vision impairment on learning and development, and of the potential contribution of Orientation and Mobility. Systemic and cultural barriers to equitable inclusive education for these North and Far North Department of Education students with vision impairment were uncovered. Orientation and Mobility learning and teaching was clearly shown to overlap with and embed content from the Australian Curriculum. A key finding was the isolation of a core set of micro-teaching skills pertinent to Orientation and Mobility learning and teaching. These skills were identified as: Orientation and Mobility teacher attention to dialogic language and feedback, extended interaction wait times, and shared attention to spatial and contextual environments within the Orientation and Mobility lesson. As this skill set can be used to design Orientation and Mobility learning and teaching experiences that explicitly scaffold the development of non-visual, visual, tactile, auditory, and kinaesthetic precursor concepts, it was given the appropriated name of practice architecture. An important practical outcome of the research was the formulation of an ontogenetic model of Orientation and Mobility learning and teaching. This model, which closely follows the natural development of each student with vision impairment, may serve as a tool that enables teachers to more systematically chart the biophysical attributes of the student with vision impairment. It thereby provides a learning and teaching framework for designing interactions with students with vision impairment. The ontogenetic framework has the potential to facilitate greater integration of what-and-how learning occurs in Orientation and Mobility with what-and-how learning might occur in the regular classroom.

    Evidence of Olfactory and Visual Learning in The Asian Citrus Psyllid, Diaphorina citri Kuwayama (Hemiptera: Psyllidae)

    Get PDF
    Investigation of the mechanisms underlying learning and memory can be achieved through research on neurobiologically simplified invertebrate species. As such, insects have been used for decades as ideal models of olfactory learning. The current study aimed to investigate the mechanisms of chemosensory attraction in an invasive insect, Diaphorina citri, the Asian citrus psyllid (ACP), through manipulation of olfactory stimuli. After classical conditioning to a non-innate cue (vanilla extract), psyllids displayed enhanced feeding behavior. There was, however, an inverse relationship between olfactory “noise” and feeding behavior. Preliminary data suggest that ACP may also be visual learners, as evidenced by trials attempting to condition ACP to the color blue. The data indicate that while learning is possible in ACP, it is easily disrupted. As a result, an innate response to host plant stimuli in oligophagous, selectively feeding insects may represent the most adaptive means of locating resources.

    Robots in Agriculture: State of Art and Practical Experiences

    Get PDF
    The presence of robots in agriculture has grown significantly in recent years, overcoming some of the challenges and complications of this field. This chapter aims to provide a complete and recent state of the art on the application of robots in agriculture. The work addresses the topic from two perspectives. On the one hand, it covers the disciplines that lead the automation of agriculture, such as precision agriculture and greenhouse farming, and collects proposals for automating tasks like planting and harvesting, environmental monitoring, and crop inspection and treatment. On the other hand, it compiles and analyses the robots proposed to accomplish these tasks: e.g. manipulators, ground vehicles and aerial robots. Additionally, the chapter reports in more detail on some practical experiences of applying robot teams to crop inspection and treatment in outdoor agriculture, as well as to environmental monitoring in greenhouse farming.

    Travel Routes and Spatial Abilities in Wild Chacma Baboons (Papio ursinus)

    Get PDF
    The primary objective of this research was to give insight into the spatial cognitive abilities of chacma baboons (Papio ursinus) and to address the question of whether chacma baboons internally represent spatial information of large-scale space in the form of a so-called topological map or a Euclidean map. The topological-map hypothesis envisions that animals acquire, remember and integrate a set of interconnected pathways or route segments that are linked by frequently used landmarks or nodes, at which the animals make travel decisions. When navigating using a Euclidean map, animals encode information in the form of true angles and distances in order to compute novel routes or shortcuts to reach out-of-view goals. Although findings of repeatedly used travel routes are generally considered evidence that animals possess topological-based spatial awareness, they are not necessarily evidence that animals navigate solely using a topological map or that they lack Euclidean spatial representation entirely. Therefore, three predictions were tested to distinguish between the hypothesised use of a topological map and of a Euclidean map. It was investigated whether there was a difference in travel linearity between the core area and the periphery of the home range; whether travel goals were approached from all directions or from one (or a few) distinct directions using the same approach routes; and lastly, whether there was a difference between the initial leaving direction from a travel goal and the general direction towards the next goal. Data were collected during a 19-month period (04/2007-11/2008) at Lajuma research centre in the Soutpansberg (Limpopo Province, South Africa). A group of baboons was followed from their morning sleeping site to their evening sleeping site for 234 days, during which location records, behavioural data and important resource data were recorded.
A statistical procedure termed the change-point test (CPT) was employed to identify locations at which baboons started orienting towards a goal, and the baboons showed goal-directed travel towards the identified travel goals. Subsequently, hotspot analysis was employed to delineate clusters of such change-points, termed ‘decision hotspots’. Decision hotspots coincided with highly valuable resources, towards which baboons showed significantly faster travel; it thus seemed that they ‘knew’ when they were nearing their goals and adapted their speed accordingly. Decision hotspots were also located at navigational landmarks that delineated a network of repeatedly used travel routes characteristic of a topological map. This method therefore offers an important utility to the study of decision-making, allowing a range of sites to be selected for detailed observations beyond the sleeping sites or ‘stop’ sites to which such studies were previously limited; such selection would have been impossible had the decision hotspots not been identified in advance. Furthermore, the baboons travelled as efficiently in the periphery as in the core area of their home range, which at first appears more consistent with Euclidean spatial awareness. However, the comparatively low travel linearity throughout the home range makes it more likely that the baboons accumulated a similar knowledge of the periphery as of the core area, which allowed them to navigate with similar efficiency through both areas. The mountainous terrain at the study site provided ample prominent landmarks to aid the baboons in navigation and allowed them to initiate navigation to a travel goal in the same direction as when they reached that goal. The baboons did not approach travel goals from all directions; instead, they approached their goals from the same direction(s).
In conclusion, the findings of this research are more consistent with the use of a topological spatial representation of large-scale space, in which landmarks aid the baboons in navigating efficiently. A review of the literature shows that, to date, evidence for the existence of Euclidean spatial representation in both animals and humans is extremely limited and often unconvincing. It is likely that a high level of experimental control will be necessary to unambiguously demonstrate the existence of Euclidean spatial awareness in the future.
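The intuition behind a change-point test on travel paths can be conveyed, in a heavily simplified form, by flagging fixes where the mean travel heading shifts sharply. The published CPT is a permutation test on path vectors; the sketch below (window size, threshold and names are all assumptions) only illustrates the idea:

```python
import numpy as np

def heading_change_points(xy, k=5, thresh=np.pi / 3):
    """Flag candidate change-points where mean travel heading shifts.

    xy: (N, 2) array of consecutive location fixes. For each fix, the
    circular mean heading of the k preceding steps is compared with
    that of the k following steps; a large angular difference marks a
    candidate travel-decision point.
    """
    steps = np.diff(xy, axis=0)
    angles = np.arctan2(steps[:, 1], steps[:, 0])
    flags = []
    for i in range(k, len(angles) - k):
        before = np.arctan2(np.sin(angles[i - k:i]).mean(),
                            np.cos(angles[i - k:i]).mean())
        after = np.arctan2(np.sin(angles[i:i + k]).mean(),
                           np.cos(angles[i:i + k]).mean())
        # wrap the difference into (-pi, pi] before thresholding
        diff = np.abs(np.arctan2(np.sin(after - before),
                                 np.cos(after - before)))
        if diff > thresh:
            flags.append(i)
    return flags
```

Clustering such flags over many day-journeys is the spirit of the hotspot analysis described in the abstract.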

    Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion

    Get PDF
    Agricultural mobile robots have great potential to effectively implement different agricultural tasks. They can save human labour costs, avoid the need for people to perform risky operations and increase productivity. Automation and advanced sensing technologies can provide up-to-date information that helps farmers in orchard management. Data collected from on-board sensors on a mobile robot provide information that can help the farmer detect tree or fruit diseases or damage, measure tree canopy volume and monitor fruit development. In orchards, trees are natural landmarks providing suitable cues for mobile robot localisation and navigation, as trees are nominally planted in straight and parallel rows. This thesis presents a novel tree trunk detection algorithm that detects trees and discriminates between trees and non-tree objects in the orchard using a camera and 2D laser scanner data fusion. A local orchard map of the individual trees was developed, allowing the mobile robot to navigate to a specific tree in the orchard to perform a specific task such as tree inspection. Furthermore, this thesis presents a localisation algorithm that does not rely on GPS positions and depends only on the on-board sensors of the mobile robot, without adding any artificial landmarks, reflective tapes or tags to the trees. The novel tree trunk detection algorithm combined the features extracted from a low-cost camera's images and 2D laser scanner data to increase the robustness of the detection. The developed algorithm used a new method to detect the edge points and determine the width of the tree trunks and non-tree objects from the laser scan data. A projection of the edge points from the laser scanner coordinates to the image plane was then implemented to construct a region of interest with the required features for tree trunk colour and edge detection. The camera images were used to verify the colour and the parallel edges of the tree trunks and non-tree objects.
The algorithm automatically adjusted the colour detection parameters after each test, which was shown to increase the detection accuracy. The orchard map was constructed based on tree trunk detection and consisted of the 2D positions of the individual trees and non-tree objects. The map of the individual trees was used as an a priori map for mobile robot localisation. A data fusion algorithm based on an Extended Kalman filter was used for pose estimation of the mobile robot along different paths (midway between rows, close to the rows and moving around trees in the row) and different turns (semi-circle and right-angle turns) required for tree inspection tasks. The 2D positions of the individual trees were used in the correction step of the Extended Kalman filter to enhance localisation accuracy. Experimental tests were conducted in a simulated environment and in a real orchard to evaluate the performance of the developed algorithms. The tree trunk detection algorithm was evaluated under two broad illumination conditions (sunny and cloudy). The algorithm was able to detect the tree trunks (regular and thin) and discriminate between trees and non-tree objects with a detection accuracy of 97%, showing that the fusion of vision and 2D laser scanner technologies produced robust tree trunk detection. The mapping method successfully localised all the trees and non-tree objects of the tested tree rows in the orchard environment. The mapping results indicated that the constructed map can be reliably used for mobile robot localisation and navigation. The localisation algorithm was evaluated against logged RTK-GPS positions for different paths and headland turns. The averages of the RMS of the position error in the x and y coordinates and in Euclidean distance were 0.08 m, 0.07 m and 0.103 m respectively, whilst the average of the RMS of the heading error was 3.32°. 
These results were considered acceptable, both while driving along the rows and when executing headland turns, for the target application of autonomous mobile robot navigation and tree inspection tasks in orchards.
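The Extended Kalman filter correction step using mapped tree positions can be sketched with a standard range-bearing landmark update for a planar (x, y, heading) pose. This is a textbook formulation under assumed names and noise parameters, not the thesis's implementation:

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Motion prediction for a unicycle pose x = (px, py, heading)."""
    px, py, th = x
    x_new = np.array([px + v * dt * np.cos(th),
                      py + v * dt * np.sin(th),
                      th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0, 1.0]])
    return x_new, F @ P @ F.T + Q

def ekf_correct(x, P, z, landmark, R):
    """Correct the pose with a range-bearing measurement z = (r, phi)
    to a tree whose 2-D map position `landmark` is known a priori."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                  [dy / q,           -dx / q,          -1.0]])
    y = z - z_hat
    y[1] = np.arctan2(np.sin(y[1]), np.cos(y[1]))   # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

Each detected tree trunk that is matched to a mapped tree contributes one such correction, pulling the pose estimate back toward the trajectory implied by the a priori map.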

    Autonomous Robots’ Visual Perception in Underground Terrains using Statistical Region Merging

    Get PDF
    Robots’ visual perception is a field that is gaining increasing attention from researchers. This is partly due to emerging trends in the commercial availability of 3D scanning systems and devices that produce highly accurate information for a variety of applications. Throughout the history of mining, the mortality rate of mine workers has been alarming, and robots exhibit a great deal of potential to tackle safety issues in mines. However, an effective vision system is crucial to safe autonomous navigation in underground terrains. This work investigates robots’ perception in underground terrains (mines and tunnels) using the statistical region merging (SRM) model. SRM reconstructs the main structural components of an image by a simple but effective statistical analysis. An investigation is conducted on different regions of the mine, such as the shaft, stope and gallery, using publicly available mine frames together with a stream of locally captured mine images. An investigation is also conducted on a stream of underground tunnel image frames, using the XBOX Kinect 3D sensor. The Kinect sensor produces streams of red, green and blue (RGB) and depth images at 640 × 480 resolution and 30 frames per second. Integrating the depth information into the drivability analysis gives a strong cue, yielding 3D results that augment the drivable and non-drivable regions detected in 2D. The results of the 2D and 3D experiments on different terrains, mines and tunnels, together with the qualitative and quantitative evaluation, reveal that good drivable regions can be detected in dynamic underground terrains.
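The SRM merge process can be sketched for a grayscale image: 4-connected pixel pairs are visited in order of intensity difference, and two regions merge while their mean difference stays within a size-dependent statistical bound. The predicate below is a common simplification of the published one, and all names and parameter values are illustrative:

```python
import numpy as np

def srm_segment(img, Q=32.0, g=256.0):
    """Simplified grayscale Statistical Region Merging.

    Returns a label image; Q controls segmentation coarseness and g is
    the intensity range. Merge bound: b(R) = g * sqrt(ln(6) / (2*Q*|R|)).
    """
    h, w = img.shape
    n = h * w
    parent = np.arange(n)                 # union-find forest over pixels
    size = np.ones(n)
    mean = img.astype(float).ravel().copy()

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    # 4-connected pixel pairs, sorted by absolute intensity difference
    flat = img.astype(float).ravel()
    pairs = []
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w:
                pairs.append((abs(flat[i] - flat[i + 1]), i, i + 1))
            if r + 1 < h:
                pairs.append((abs(flat[i] - flat[i + w]), i, i + w))
    pairs.sort()

    def b(sz):
        return g * np.sqrt(np.log(6.0) / (2.0 * Q * sz))

    for _, i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj and abs(mean[ri] - mean[rj]) <= np.hypot(b(size[ri]), b(size[rj])):
            tot = size[ri] + size[rj]     # merge: pooled mean and size
            mean[ri] = (mean[ri] * size[ri] + mean[rj] * size[rj]) / tot
            size[ri] = tot
            parent[rj] = ri

    return np.array([find(i) for i in range(n)]).reshape(h, w)
```

For drivability analysis, the resulting regions would then be classified (e.g. by mean depth or texture) into drivable and non-drivable areas.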