
    Trajectory solutions for a game-playing robot using nonprehensile manipulation methods and machine vision

    The need for autonomous systems designed to play games, both strategy-based and physical, stems from the quest to model human behaviour in demanding, competitive environments that require human skill at its best. In the last two decades, and especially after the 1997 defeat of the world chess champion by a chess-playing computer, physical games have received greater attention. RoboCup, i.e. robotic football, is a well-known example, with thousands of researchers participating all over the world. The robots created to play snooker/pool/billiards belong in this context. Snooker, as well as being a game of strategy, demands accurate physical manipulation skills from the player, and these two aspects qualify snooker as a potential game for research into autonomous system development. Although research into snooker playing strategy has made considerable progress using various artificial intelligence methods, the physical manipulation part of the game has not been fully addressed by the robots created so far. This thesis examines the different ball-manipulation options snooker players use, such as shots that impart spin to the ball in order to position the balls accurately on the table, by predicting the ball trajectories under the action of various dynamic phenomena, such as impacts. A 3-degree-of-freedom robot is designed and fabricated that can manipulate the snooker cue on a par with humans, at high velocities, using a servomotor, and position the snooker cue on the ball accurately with the help of a stepper drive. [Continues.]
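    As a concrete illustration of the kind of trajectory prediction involved, the sketch below simulates a struck cue ball in one dimension that first slides, then rolls, under Coulomb friction. The friction coefficients and the simple slide/roll model are assumptions for illustration, not the dynamics model developed in the thesis.

```python
# Illustrative sketch only: 1-D model of a struck cue ball that slides,
# then rolls, under Coulomb friction. MU_SLIDE and MU_ROLL are assumed
# values, not parameters taken from the thesis.
import numpy as np

G = 9.81          # gravity, m/s^2
R = 0.02625       # snooker ball radius, m
MU_SLIDE = 0.2    # assumed sliding-friction coefficient
MU_ROLL = 0.01    # assumed rolling-resistance coefficient

def simulate(v0, w0, dt=1e-4, t_max=30.0):
    """Return (time, position) samples for a ball struck with linear
    speed v0 and angular speed w0 (topspin > 0, backspin < 0)."""
    v, w, x, t = v0, w0, 0.0, 0.0
    ts, xs = [0.0], [0.0]
    while v > 0 and t < t_max:
        slip = v - w * R                     # contact-point velocity
        if abs(slip) > 1e-3:                 # sliding phase
            v -= MU_SLIDE * G * np.sign(slip) * dt
            w += (5 * MU_SLIDE * G) / (2 * R) * np.sign(slip) * dt
        else:                                # rolling phase
            v -= MU_ROLL * G * dt
            w = v / R
        x += v * dt
        t += dt
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

t, x = simulate(v0=2.0, w0=-50.0)            # a shot with backspin
print(f"ball stops after {x[-1]:.2f} m in {t[-1]:.2f} s")
```

    Note that during the sliding phase friction acts against the slip, so backspin shortens the run while topspin lengthens it, which is exactly the positional control the thesis attributes to spin shots.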

    Mani-GPT: A Generative Model for Interactive Robotic Manipulation

    In real-world scenarios, human dialogues span multiple rounds and are diverse. Furthermore, human instructions can be unclear, and human responses are unrestricted. Interactive robots therefore have difficulty understanding human intents and generating suitable strategies for assisting people through manipulation. In this article, we propose Mani-GPT, a Generative Pre-trained Transformer (GPT) for interactive robotic manipulation. The proposed model can understand the environment through object information, understand human intent through dialogue, generate natural-language responses to human input, and generate appropriate manipulation plans to assist the human, making human-robot interaction more natural and humanized. In our experiments, Mani-GPT outperforms existing algorithms with an accuracy of 84.6% in intent recognition and action decision-making. Furthermore, it shows satisfactory performance in real-world dialogue tests with users, achieving an average response accuracy of 70%.
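    The abstract does not specify Mani-GPT's interface; the sketch below only illustrates the general pattern it describes, in which a generative model consumes scene-object information plus dialogue history and emits both a natural-language reply and a manipulation plan. All names here (DialogueState, query_model) are hypothetical placeholders.

```python
# A minimal sketch of the interaction pattern described in the abstract.
# `query_model` is a hypothetical stand-in for the GPT backbone; it is
# NOT the published Mani-GPT interface.
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    objects: list[str]                       # objects detected in the scene
    turns: list[str] = field(default_factory=list)

def build_prompt(state: DialogueState, user_msg: str) -> str:
    """Serialize scene context and dialogue history into one prompt."""
    return (f"Scene objects: {', '.join(state.objects)}\n"
            + "\n".join(state.turns)
            + f"\nUser: {user_msg}\nAssistant (reply + plan):")

def query_model(prompt: str) -> tuple[str, list[str]]:
    """Placeholder for the generative model: returns (reply, action plan)."""
    return ("Sure, I will hand you the cup.",
            ["locate(cup)", "grasp(cup)", "move_to(user)", "release()"])

state = DialogueState(objects=["cup", "bottle", "plate"])
user_msg = "Could you pass me something to drink from?"
reply, plan = query_model(build_prompt(state, user_msg))
state.turns += [f"User: {user_msg}", f"Assistant: {reply}"]
for action in plan:
    print("execute:", action)                # hand off to the robot controller
```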

    Autonomous Lionfish Harvester

    The Lionfish Major Qualifying Project team has developed a harvesting mechanism for hunting lionfish, intended to be attached to an autonomous submarine robot. The harvester functions as an independent mechanism capable of sensing lionfish, determining their location, and harvesting them.
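    As a rough illustration of such a sense-localize-harvest pipeline, the sketch below shows the control loop one might wrap around it. Every function is a hypothetical placeholder, since the summary gives no implementation details.

```python
# A minimal sketch of the sense -> localize -> harvest loop the summary
# outlines. All functions are hypothetical placeholders; the team's actual
# detector and actuation hardware are not described here.
import time

def capture_frame():
    """Grab an image from the submarine's camera (placeholder)."""
    return None

def detect_lionfish(frame):
    """Return (found, xyz) from a vision model (placeholder)."""
    return False, (0.0, 0.0, 0.0)

def aim_and_harvest(xyz):
    """Point the harvesting mechanism at the target and trigger it."""
    print(f"harvesting target at {xyz}")

while True:
    found, position = detect_lionfish(capture_frame())
    if found:
        aim_and_harvest(position)
    time.sleep(0.1)                          # pace the sensing loop
```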

    Teaching iCub to recognize objects using deep convolutional neural networks

    Providing robots with accurate and robust visual recognition capabilities in the real world is today a challenge that prevents the use of autonomous agents in concrete applications. Indeed, the majority of tasks, such as manipulation and interaction with other agents, critically depend on the ability to visually recognize the entities involved in a scene. At the same time, computer vision systems based on deep Convolutional Neural Networks (CNNs) are marking a breakthrough in fields such as large-scale image classification and retrieval. In this work we investigate how the latest results in deep learning can advance the visual recognition capabilities of a robotic platform (the iCub humanoid robot) in a real-world scenario. We benchmark the performance of the resulting system on a new dataset of images depicting 28 objects, named iCubWorld28, which we plan to release. In the spirit of the iCubWorld dataset series, it has been collected in a framework reflecting the iCub's typical daily visual experience. Moreover, in this release we provide four different acquisition sessions to test incremental learning capabilities over multiple days. Our study addresses the question: how many objects can the iCub recognize today?
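    As a hedged illustration of the kind of experiment described, the sketch below fine-tunes an off-the-shelf CNN on a 28-class, folder-per-object image set. The ResNet-18 backbone, dataset path, and hyperparameters are stand-in assumptions, not the paper's actual setup.

```python
# Illustrative sketch only: fine-tuning an off-the-shelf CNN on a small,
# folder-per-object image set, in the spirit of the iCubWorld28 benchmark.
# Backbone, path, and hyperparameters are assumptions.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train = datasets.ImageFolder("icubworld28/day1/train", tfm)  # hypothetical path
loader = torch.utils.data.DataLoader(train, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 28)   # 28 object classes
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                    # one epoch shown
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

    Repeating this loop on each of the four acquisition sessions in turn would mimic the incremental, multi-day learning setting the abstract describes.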

    A Training Framework of Robotic Operation and Image Analysis for Decision-Making in Bridge Inspection and Preservation

    This project aims to create a framework for training engineers and policy makers in robotic operation and image analysis for the inspection and preservation of transportation infrastructure. Specifically, it develops a method for collecting camera-based bridge inspection data and algorithms for data processing and pattern recognition, and it creates tools that assist users in visually analyzing the processed image data and recognized patterns for inspection and preservation decision-making. The project first developed a Siamese Neural Network to support bridge engineers in analyzing big video data. The network was initially trained by one-shot learning and is fine-tuned iteratively with a human in the loop. Bridge engineers define a region of interest, and the algorithm then retrieves all related regions in the video, letting the engineers inspect the bridge rather than exhaustively checking every frame. The neural network was evaluated on three bridge inspection videos with promising performance. The project then developed an assistive intelligence system to help inspectors efficiently and accurately detect and segment multiclass bridge elements from inspection videos. A Mask Region-based Convolutional Neural Network was transferred to the studied problem with a small initial training dataset labeled by the inspector; temporal coherence analysis was then used to recover false-negative detections of the transferred network. Finally, self-training guided by experienced inspectors was used to iteratively refine the network. Results from a case study demonstrate that the proposed method needs only a small amount of time and guidance from experienced inspectors to build an assistive intelligence system with excellent performance.
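    As an illustration of the Siamese-style region retrieval described above, the sketch below compares an engineer-defined region of interest against candidate regions from later frames via a shared CNN embedding and cosine similarity. The backbone and the similarity threshold are assumptions, not the project's trained network.

```python
# Illustrative sketch only: Siamese-style matching of an engineer-defined
# ROI against candidate video regions using a shared CNN embedding.
# The ResNet-18 backbone and the 0.8 threshold are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # use the 512-d pooled feature as embedding
backbone.eval()

@torch.no_grad()
def embed(patch: torch.Tensor) -> torch.Tensor:
    """patch: (3, 224, 224) image tensor, already normalized."""
    return F.normalize(backbone(patch.unsqueeze(0)), dim=1)

def matches(roi: torch.Tensor, candidate: torch.Tensor, thr: float = 0.8) -> bool:
    """True when the candidate region looks like the reference ROI."""
    sim = (embed(roi) * embed(candidate)).sum().item()   # cosine similarity
    return sim > thr
```

    Because both inputs pass through the same frozen network, a single labeled ROI is enough to rank every candidate region in a video, which is what makes the one-shot, human-in-the-loop workflow practical.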