
    Automatic Romaine Heart Harvester

    The Romaine Robotics Senior Design Team developed a romaine lettuce heart trimming system in partnership with a Salinas farm to address a growing labor shortage in the agricultural industry that is leaving crops to rot in the field before they can be harvested. An automated trimmer can alleviate the most time-consuming step in the cut-trim-bag harvesting process, increasing the yields of robotic cutters or the speed of existing laborer teams. Leveraging the Partner Farm's existing trimmer architecture, which consists of a laborer loading lettuce into spring-loaded grippers that are rotated through vision and cutting systems by an indexer, the team redesigned the geometry to improve the loading, gripping, and ejection stages of the system. Physical testing, hand calculations, and FEA were performed to understand acceptable grip strengths and cup design, and several wooden mockups were built to explore a new actuating linkage design for the indexer. The team manufactured, assembled, and performed verification testing on a full-size metal motorized prototype that can be incorporated with the Partner Farm's existing cutting and vision systems. The prototype met all of the established requirements, and the farm has implemented the redesign on its trimmer. Future work would include designing and implementing vision and cutting systems for the team's metal prototype.
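
    The abstract mentions hand calculations for acceptable grip strengths. As a rough illustration only (the team's actual loads, friction values, and method are not given here, so every number below is an assumption), such a calculation might balance cup friction against gravity and the cutting load:

```python
# Illustrative grip-force estimate for a spring-loaded lettuce gripper.
# Every constant below is an assumed value for demonstration purposes,
# not a figure from the Romaine Robotics team's report.

MU = 0.4             # friction coefficient, cup material on lettuce (assumed)
HEAD_MASS = 0.8      # kg, mass of a romaine heart (assumed)
CUT_FORCE = 15.0     # N, axial load during trimming (assumed)
G = 9.81             # m/s^2, gravitational acceleration
SAFETY_FACTOR = 2.0  # design margin (assumed)
N_CUPS = 2           # opposing gripper cups sharing the load (assumed)

def required_normal_force(mass, cut_force, mu, n_cups, sf):
    """Normal force per cup so friction resists gravity plus the cut load."""
    total_load = mass * G + cut_force       # worst-case load along the head axis
    return sf * total_load / (mu * n_cups)  # split across friction interfaces

if __name__ == "__main__":
    f_n = required_normal_force(HEAD_MASS, CUT_FORCE, MU, N_CUPS, SAFETY_FACTOR)
    print(f"Required normal force per cup: {f_n:.1f} N")  # ~57.1 N here
```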

    Supporting People with Vision Impairments in Automated Vehicles: Challenges and Opportunities

    Autonomous and automated vehicles (AVs) will provide many opportunities for mobility and independence for people with vision impairments (PwVI). This project provides insights on the challenges and potential barriers to their adoption of AVs. We examine adoption and use of ridesharing services. We study ridesharing as a proxy for AVs, as they are a similar means of single-rider transportation for PwVI, through observations and interviews. We also investigate perceptions towards autonomous vehicles and prototypes to address perceived barriers to AV use through design focus groups with blind and low-vision people. From these studies, we provide recommendations to AV manufacturers and suppliers for how to best design vehicles and interactive systems that people with vision impairments trust.
    United States Department of Transportation
    https://deepblue.lib.umich.edu/bitstream/2027.42/156054/3/Supporting People with Vision Impairments in Automated Vehicles - Challenges and Opportunities.pd

    A Flexible and Robust Vision Trap for Automated Part Feeder Design

    Fast, robust, and flexible part feeding is essential for enabling automation of low-volume, high-variance assembly tasks. An actuated vision-based solution on a traditional vibratory feeder, referred to here as a vision trap, should in principle be able to meet these demands for a wide range of parts. However, in practice, the flexibility of such a trap is limited, as an expert is needed both to identify manageable tasks and to configure the vision system. We propose a novel approach to vision trap design in which the identification of manageable tasks is automatic and the configuration of these tasks can be delegated to an automated feeder design system. We show that the trap's capabilities can be formalized in such a way that it integrates seamlessly into the ecosystem of automated feeder design. Our results on six canonical parts show great promise for autonomous configuration of feeder systems.
    Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)
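
    To make the vision-trap concept concrete, here is a minimal sketch, not the authors' implementation, of the decision such a trap performs on each part: score the camera image against an accepted-pose template and actuate a reject mechanism otherwise. The threshold and the actuator calls (pass_part, reject_part) are invented placeholders:

```python
import cv2

MATCH_THRESHOLD = 0.8  # assumed acceptance score; tuned per part in practice

def pass_part() -> None:
    print("pass")    # placeholder for the trap's pass-through actuator

def reject_part() -> None:
    print("reject")  # placeholder for an air jet returning the part to the bowl

def part_in_feedable_pose(gray_frame, template) -> bool:
    """Score the live image against an accepted-pose template."""
    result = cv2.matchTemplate(gray_frame, template, cv2.TM_CCOEFF_NORMED)
    return float(result.max()) >= MATCH_THRESHOLD

def run_trap(camera_index: int, template_path: str) -> None:
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if part_in_feedable_pose(gray, template):
            pass_part()
        else:
            reject_part()
    cap.release()
```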

    CAGD based 3-D visual recognition

    Journal Article
    A coherent automated manufacturing system needs to include CAD/CAM, computer vision, and object manipulation. Currently, most systems which support CAD/CAM do not provide for vision or manipulation, and similarly, vision and manipulation systems incorporate no explicit relation to CAD/CAM models. CAD/CAM systems have emerged which allow the designer to conceive and model an object and automatically manufacture it to the prescribed specifications. If recognition or manipulation is to be performed, existing vision systems rely on models generated in an ad hoc manner for the vision or recognition process. Although both vision and CAD/CAM systems rely on models of the objects involved, different modeling schemes are used in each case. A more unified system would allow vision models to be generated from the CAD database. We are implementing a framework in which objects are designed using an existing CAGD system and recognition strategies based on these design models are used for visual recognition and manipulation. An example of its application is given.
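
    As a simplified sketch of the idea of generating vision models from the CAD database (the paper's actual CAGD representation is richer and is not reproduced here), one can derive a matching model such as a face-adjacency graph directly from design geometry:

```python
from collections import defaultdict

# Toy CAD model: faces listed as tuples of vertex indices (a cube fragment).
# Real CAGD systems expose richer B-rep data; this structure is an assumption.
FACES = {
    "top":   (4, 5, 6, 7),
    "front": (0, 1, 5, 4),
    "right": (1, 2, 6, 5),
}

def face_adjacency_graph(faces):
    """Two faces are adjacent when they share an edge (two vertices)."""
    graph = defaultdict(set)
    names = list(faces)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if len(set(faces[a]) & set(faces[b])) >= 2:
                graph[a].add(b)
                graph[b].add(a)
    return dict(graph)

if __name__ == "__main__":
    # The resulting graph can serve as a matching model for visual recognition.
    print(face_adjacency_graph(FACES))
```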

    Large Scale Visual Recommendations From Street Fashion Images

    We describe a completely automated large-scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose four data-driven models, in the form of Complementary Nearest Neighbor Consensus, Gaussian Mixture Models, Texture Agnostic Retrieval, and Markov Chain LDA, for solving this problem. We analyze the relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. Finally, we outline a large-scale annotated data set of fashion images (Fashion-136K) that can be exploited for future vision research.
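
    The Complementary Nearest Neighbor Consensus model is named but not specified in this abstract. As an illustrative sketch under invented data and feature dimensions, complementary recommendation can be framed as nearest-neighbor search over items paired in street-fashion photos, with the neighbors' partners serving as candidates:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Assumed data: color/texture feature vectors for tops and for the bottoms
# worn with them in street-fashion photos (paired by row index).
top_features = rng.random((500, 16))
bottom_features = rng.random((500, 16))

# Index tops; a query top recommends through its neighbors' paired bottoms.
index = NearestNeighbors(n_neighbors=5).fit(top_features)

def recommend_bottoms(query_top: np.ndarray, k: int = 3) -> np.ndarray:
    """Return bottom feature vectors paired with the query's nearest tops."""
    _, neighbor_ids = index.kneighbors(query_top.reshape(1, -1))
    candidates = bottom_features[neighbor_ids[0]]
    return candidates[:k]  # a fuller consensus step would cluster/vote here

print(recommend_bottoms(rng.random(16)))
```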

    End-to-End Learning Via a Convolutional Neural Network for Cancer Cell Line Classification

    Purpose: Computer vision for automated analysis of cells and tissues usually includes extracting features from images before analyzing those features via various machine learning and machine vision algorithms. The purpose of this work is to explore and demonstrate the ability of a Convolutional Neural Network (CNN) to classify cells pictured via brightfield microscopy without the need for any feature extraction, using a minimum of images, improving workflows that involve cancer cell identification. Design/methodology/approach: The methodology involved a quantitative measure of the performance of a Convolutional Neural Network in distinguishing between two cancer cell lines. The authors trained, validated, and tested their 6-layer CNN on 1,241 images of the MDA-MB-468 and MCF7 breast cancer cell lines in an end-to-end fashion, allowing the system to distinguish between the two different cancer cell types. Findings: The authors obtained 99% accuracy, providing a foundation for more comprehensive systems. Originality/value: Systems based on this design can be used to assist cell identification in a variety of contexts, and as a practical implication they can be deployed to assist biomedical workflows quickly and at low cost. In conclusion, this system demonstrates the potential of end-to-end learning systems for faster and more accurate automated cell analysis.
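
    The abstract does not detail the architecture, so the following is a hedged sketch of what a 6-layer end-to-end CNN for two-class brightfield images might look like; the layer widths, input size, and channel count are assumptions:

```python
import torch
import torch.nn as nn

class CellCNN(nn.Module):
    """Illustrative 6-layer CNN: four conv layers plus two fully connected."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),  # assumes 128x128 inputs
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = CellCNN()
logits = model(torch.randn(4, 1, 128, 128))  # batch of grayscale cell images
print(logits.shape)  # torch.Size([4, 2])
```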

    Semi-Automated SVG Programming via Direct Manipulation

    Direct manipulation interfaces provide intuitive and interactive features to a broad range of users, but they often exhibit two limitations: the built-in features cannot possibly cover all use cases, and the internal representation of the content is not readily exposed. We believe that if direct manipulation interfaces were to (a) use general-purpose programs as the representation format, and (b) expose those programs to the user, then experts could customize these systems in powerful new ways and non-experts could enjoy some of the benefits of programmable systems. In recent work, we presented a prototype SVG editor called Sketch-n-Sketch that offered a step towards this vision. In that system, the user wrote a program in a general-purpose lambda-calculus to generate a graphic design and could then directly manipulate the output to indirectly change design parameters (i.e., constant literals) in the program in real time during the manipulation. Unfortunately, the burden of programming the desired relationships rested entirely on the user. In this paper, we design and implement new features for Sketch-n-Sketch that assist in the programming process itself. Like typical direct manipulation systems, our extended Sketch-n-Sketch now provides GUI-based tools for drawing shapes, relating shapes to each other, and grouping shapes together. Unlike typical systems, however, each tool carries out the user's intention by transforming their general-purpose program. This novel, semi-automated programming workflow allows the user to rapidly create high-level, reusable abstractions in the program while at the same time retaining direct manipulation capabilities. In future work, our approach may be extended with more graphic design features or realized for other application domains.
    Comment: In 29th ACM User Interface Software and Technology Symposium (UIST 2016)
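
    Sketch-n-Sketch itself transforms lambda-calculus programs; as a loose Python analogy (the program and all names here are invented, not the system's), direct manipulation can be modeled as solving for the constant literal that makes the program's output match the dragged position, so that related shapes update together:

```python
# Toy model of constant-literal updates under direct manipulation.
params = {"x0": 40, "spacing": 25}  # constant literals in the "program"

def render(p):
    """The 'program': three circle x-positions derived from two constants."""
    return [p["x0"] + i * p["spacing"] for i in range(3)]

def drag(shape_index: int, new_x: float, p: dict) -> dict:
    """Direct manipulation: solve for the literal that places the dragged
    shape at new_x, then re-run the program so the other shapes follow."""
    if shape_index == 0:
        p["x0"] = new_x
    else:
        p["spacing"] = (new_x - p["x0"]) / shape_index
    return p

print(render(params))                # [40, 65, 90]
print(render(drag(2, 120, params)))  # spacing becomes 40 -> [40, 80, 120]
```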

    Machine-assisted Cyber Threat Analysis using Conceptual Knowledge Discovery

    Over the last few years, computer networks have evolved into highly dynamic and interconnected environments, involving multiple heterogeneous devices and providing a myriad of services on top of them. This complex landscape has made it extremely difficult for security administrators to stay accurate and effective in protecting their systems against cyber threats. In this paper, we describe our vision and scientific posture on how artificial intelligence techniques and a smart use of security knowledge may assist system administrators in better defending their networks. To that end, we put forward a research roadmap involving three complementary axes, namely, (I) the use of Formal Concept Analysis (FCA)-based mechanisms for managing configuration vulnerabilities, (II) the exploitation of knowledge representation techniques for automated security reasoning, and (III) the design of a cyber threat intelligence mechanism as a CKDD process. Then, we describe a machine-assisted process for cyber threat analysis which provides a holistic perspective of how these three research axes are integrated.
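
    To ground axis (I), here is a minimal, hedged sketch of a Formal Concept Analysis computation over a toy host-vulnerability context (hosts and weaknesses are invented): each formal concept pairs a maximal set of hosts with the exact set of weaknesses they share, which is the kind of structure a CKDD process could mine for threat analysis.

```python
from itertools import combinations

# Toy formal context: which hosts exhibit which configuration vulnerabilities.
CONTEXT = {
    "web01": {"weak_tls", "default_creds"},
    "web02": {"weak_tls"},
    "db01":  {"default_creds", "open_port"},
}

def common_attributes(hosts):
    """Derivation operator: weaknesses shared by every host in the set."""
    sets = [CONTEXT[h] for h in hosts]
    return set.intersection(*sets) if sets else set()

def hosts_with(attrs):
    """Derivation operator: hosts exhibiting every weakness in the set."""
    return {h for h, a in CONTEXT.items() if attrs <= a}

def formal_concepts():
    """Enumerate (extent, intent) pairs closed under both derivations."""
    concepts = set()
    names = list(CONTEXT)
    for r in range(1, len(names) + 1):
        for hosts in combinations(names, r):
            intent = frozenset(common_attributes(hosts))
            extent = frozenset(hosts_with(intent))
            concepts.add((extent, intent))
    return concepts

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(set(extent), "share", set(intent) or "{}")
```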