
    Automated Global Feature Analyzer - A Driver for Tier-Scalable Reconnaissance

    For the purposes of space flight, reconnaissance field geologists have trained to become astronauts. However, the initial forays to Mars and other planetary bodies have been made by purely robotic craft. Training and equipping a robotic craft with the sensory and cognitive capabilities of a field geologist, to form a science craft, is therefore a necessary prerequisite. Numerous steps are necessary for a science craft to be able to map, analyze, and characterize a geologic field site, as well as to effectively formulate working hypotheses. We report on the continued development of the integrated software system AGFA (automated global feature analyzer®), originated in 2001 by Fink at Caltech and his collaborators. AGFA is an automatic, feature-driven target characterization system that operates in an imaged operational area, such as a geologic field site on a remote planetary surface. AGFA performs automated target identification and detection through segmentation, providing feature extraction, classification, and prioritization within mapped or imaged operational areas at different length scales and resolutions, depending on the vantage point (e.g., spaceborne, airborne, or ground). AGFA extracts features such as target size, color, albedo, vesicularity, and angularity. Based on the extracted features, AGFA summarizes the mapped operational area numerically and flags targets of "interest", i.e., targets that exhibit sufficient anomaly within the feature space. AGFA enables automated science analysis aboard robotic spacecraft and, embedded in tier-scalable reconnaissance mission architectures, is a driver of future intelligent and autonomous robotic planetary exploration.
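
    The abstract does not spell out how "sufficient anomaly within the feature space" is measured. A minimal sketch of one plausible realization is given below, flagging targets whose Mahalanobis distance from the field-site mean exceeds a threshold; the Python/NumPy code, the feature layout, and the threshold are illustrative assumptions, not AGFA's documented method.

        import numpy as np

        def flag_anomalous_targets(features, threshold=3.0):
            # features: (n_targets, n_features) array, e.g. columns for
            # size, color, albedo, vesicularity, and angularity per target.
            mu = features.mean(axis=0)
            cov = np.cov(features, rowvar=False)
            inv_cov = np.linalg.pinv(cov)  # pinv tolerates a singular covariance
            diff = features - mu
            # Squared Mahalanobis distance of each target from the mean.
            d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
            # Flag targets more than `threshold` "standard units" away.
            return np.flatnonzero(d2 > threshold ** 2)

    A target list produced this way could then be ranked by distance, matching the abstract's notion of prioritizing targets of "interest".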

    A 3D descriptor to detect task-oriented grasping points in clothing

    © 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
    Manipulating textile objects with a robot is a challenging task, especially because perception of the garment is difficult due to the endless configurations it can adopt, coupled with a large variety of colors and designs. Most current approaches follow a multiple re-grasp strategy, in which clothes are sequentially grasped from different points until one of them yields a recognizable configuration. In this work we propose a method that combines 3D and appearance information to directly select a suitable grasping point for the task at hand, which in our case consists of hanging a shirt or a polo shirt from a hook. Our method follows a coarse-to-fine approach in which, first, the collar of the garment is detected and, next, a grasping point on the lapel is chosen using a novel 3D descriptor. In contrast to current 3D descriptors, ours can run in real time, even when it needs to be densely computed over the input image. Our central idea is to take advantage of the structured nature of the range images that most depth sensors provide and, by exploiting integral imaging, achieve speed-ups of two orders of magnitude with respect to competing approaches while maintaining performance. This makes it especially adequate for robotic applications, as we thoroughly demonstrate in the experimental section.
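
    The two-orders-of-magnitude speed-up attributed to integral imaging can be illustrated with a short sketch: a summed-area table over the range image lets the sum (and hence the mean) of any axis-aligned window be computed in constant time, independent of window size. The Python/NumPy code below is an illustrative reconstruction of that idea, not the paper's actual descriptor.

        import numpy as np

        def integral_image(depth):
            # Zero-padded summed-area table: ii[i, j] = depth[:i, :j].sum().
            ii = np.zeros((depth.shape[0] + 1, depth.shape[1] + 1))
            ii[1:, 1:] = depth.cumsum(axis=0).cumsum(axis=1)
            return ii

        def dense_box_mean(depth, k):
            # Mean depth in the (2k+1) x (2k+1) window around every pixel:
            # four table lookups per pixel instead of O(k^2) additions.
            ii = integral_image(depth)
            h, w = depth.shape
            r0 = np.clip(np.arange(h) - k, 0, h)[:, None]
            r1 = np.clip(np.arange(h) + k + 1, 0, h)[:, None]
            c0 = np.clip(np.arange(w) - k, 0, w)[None, :]
            c1 = np.clip(np.arange(w) + k + 1, 0, w)[None, :]
            total = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
            return total / ((r1 - r0) * (c1 - c0))

    A real descriptor would aggregate several such regional statistics per point, but the cost structure is the same: building the table takes one pass over the image, after which every window query is O(1).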

    Machine Vision System for Early-stage Apple Flowers and Flower Clusters Detection for Precision Thinning and Pollination

    Early-stage identification of fruit flowers, in both opened and unopened condition, in an orchard environment provides essential information for crop load management operations such as flower thinning and pollination using automated and robotic platforms. These operations are important in tree-fruit agriculture to enhance fruit quality, manage crop load, and increase overall profit. Recent developments in agricultural automation suggest that this can be done using robotics that includes machine vision technology. In this article, we propose a vision system that detects early-stage flowers in an unstructured orchard environment using the YOLOv5 object detection algorithm. For the robotic implementation, the position of a flower-blossom cluster is needed to navigate the robot and the end effector. The centroid of each individual flower (both opened and unopened) was identified and associated with a flower cluster via K-means clustering. Detection of opened and unopened flowers achieved a mAP of up to 81.9% on commercial orchard images.
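
    As an illustration of the centroid-to-cluster association step, the sketch below groups detection centroids with K-means. It is a plausible reconstruction in Python with scikit-learn; the bounding-box format and the choice of cluster count are assumptions, since the abstract does not specify them.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_flower_centroids(boxes, n_clusters):
            # boxes: (n, 4) array of [x1, y1, x2, y2] detections (e.g. YOLOv5 output).
            centroids = np.column_stack([
                (boxes[:, 0] + boxes[:, 2]) / 2.0,  # x center
                (boxes[:, 1] + boxes[:, 3]) / 2.0,  # y center
            ])
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(centroids)
            # labels_ assigns each flower to a cluster; cluster_centers_ gives
            # a per-cluster blossom position for guiding the end effector.
            return km.labels_, km.cluster_centers_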

    Towards Autonomous Aviation Operations: What Can We Learn from Other Areas of Automation?

    Rapid advances in automation have disrupted and transformed several industries in the past 25 years. Automation has evolved from the regulation and control of simple systems, such as controlling the temperature in a room, to the autonomous control of complex systems involving networks of systems. The reasons for automation vary from industry to industry depending on the complexity and benefits resulting from increased levels of automation. Automation may be needed to reduce costs, to deal with hazardous environments, or to make real-time decisions without the availability of humans. Space autonomy, the Internet, robotic vehicles, intelligent systems, wireless networks, and power systems provide successful examples of various levels of automation. NASA is conducting research in autonomy and developing plans to increase the levels of automation in aviation operations. This paper provides a brief review of levels of automation, previous efforts to increase levels of automation in aviation operations, and the current level of automation in the various tasks involved in aviation operations. It develops a methodology to assess the research and development in modeling, sensing, and actuation needed to advance the level of automation and the benefits associated with higher levels of automation. Section II provides an overview of automation and previous attempts at automation in aviation. Section III describes the role of automation and lessons learned in space autonomy. Section IV describes the success of automation in Intelligent Transportation Systems. Section V provides a comparison between the development of automation in other areas and the needs of aviation. Section VI provides an approach to achieve increased automation in aviation operations based on the progress in other areas. The final paper will provide a detailed analysis of the benefits of increased automation for the Traffic Flow Management (TFM) function in aviation operations.